
Allow end user to control badger's memory usage #3

Open
Avokadoen opened this issue Feb 9, 2021 · 2 comments
Labels
enhancement New feature or request

Comments

@Avokadoen
Contributor

Avokadoen commented Feb 9, 2021

Is your feature request related to a problem? Please describe.
Currently, gowarcserver instances can consume more memory than they should in their host container.

Describe the solution you'd like
Badger has an extensive API that should enable a solution where the end user can configure memory usage at startup via arguments and/or the config file. @maeb made a previous attempt at this, which can be found over at the gowarc repo (the link is not permanent, so it might die at some point).

Additional context
A good place to start might be badger's documentation entry on memory usage: https://dgraph.io/docs/badger/get-started/#memory-usage

All option fields in badger v2.2007.2:
https://github.com/dgraph-io/badger/blob/d5a25b83fbf4f3f61ff03a9202e36f5b75544426/options.go#L35

// Required options.
Dir      string
ValueDir string

// Usually modified options.
SyncWrites          bool
TableLoadingMode    options.FileLoadingMode
ValueLogLoadingMode options.FileLoadingMode
NumVersionsToKeep   int
ReadOnly            bool
Truncate            bool
Logger              Logger
Compression         options.CompressionType
InMemory            bool

// Fine tuning options.

MaxTableSize        int64
LevelSizeMultiplier int
MaxLevels           int
ValueThreshold      int
NumMemtables        int
// Changing BlockSize across DB runs will not break badger. The block size is
// read from the block index stored at the end of the table.
BlockSize          int
BloomFalsePositive float64
KeepL0InMemory     bool
BlockCacheSize     int64
IndexCacheSize     int64
LoadBloomsOnOpen   bool

NumLevelZeroTables      int
NumLevelZeroTablesStall int

LevelOneSize       int64
ValueLogFileSize   int64
ValueLogMaxEntries uint32

NumCompactors        int
CompactL0OnClose     bool
LogRotatesToFlush    int32
ZSTDCompressionLevel int

// When set, checksum will be validated for each entry read from the value log file.
VerifyValueChecksum bool

// Encryption related options.
EncryptionKey                 []byte        // encryption key
EncryptionKeyRotationDuration time.Duration // key rotation duration

// BypassLockGuard will bypass the lock guard on badger. Bypassing lock
// guard can cause data corruption if multiple badger instances are using
// the same directory. Use this option with caution.
BypassLockGuard bool

// ChecksumVerificationMode decides when db should verify checksums for SSTable blocks.
ChecksumVerificationMode options.ChecksumVerificationMode

// DetectConflicts determines whether the transactions would be checked for
// conflicts. The transactions can be processed at a higher rate when
// conflict detection is disabled.
DetectConflicts bool

// Transaction start and commit timestamps are managed by end-user.
// This is only useful for databases built on top of Badger (like Dgraph).
// Not recommended for most users.
managedTxns bool

// 4. Flags for testing purposes
// ------------------------------
maxBatchCount int64 // max entries in batch
maxBatchSize  int64 // max batch size in bytes
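
A minimal sketch of what the memory-focused subset of these options could look like when wired into gowarcserver's startup, assuming badger v2 (the `openLowMemoryDB` helper and the specific values are hypothetical, not gowarcserver's actual configuration):

```go
// Sketch: exposing badger's memory-related knobs at startup.
// Assumes badger v2 (github.com/dgraph-io/badger/v2); the helper name
// and the concrete sizes below are illustrative placeholders only.
package main

import (
	badger "github.com/dgraph-io/badger/v2"
	"github.com/dgraph-io/badger/v2/options"
)

func openLowMemoryDB(dir string) (*badger.DB, error) {
	opts := badger.DefaultOptions(dir).
		// Read tables and value log from disk rather than keeping
		// memory-mapped copies resident.
		WithTableLoadingMode(options.FileIO).
		WithValueLogLoadingMode(options.FileIO).
		// Fewer and smaller memtables shrink the resident set.
		WithNumMemtables(2).
		WithMaxTableSize(16 << 20). // 16 MiB
		WithKeepL0InMemory(false).
		// Smaller value log files and caches.
		WithValueLogFileSize(64 << 20). // 64 MiB
		WithBlockCacheSize(32 << 20).
		WithIndexCacheSize(16 << 20).
		// Trade CPU cycles for a smaller footprint via compression.
		WithCompression(options.ZSTD).
		WithLoadBloomsOnOpen(false)
	return badger.Open(opts)
}
```

Each `With*` call here maps directly onto one of the option fields listed above, so the end-user-facing flags or config keys could mirror those field names one-to-one.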
@Avokadoen Avokadoen added the enhancement New feature or request label Feb 9, 2021
@Avokadoen Avokadoen self-assigned this Feb 11, 2021
Avokadoen added a commit to Avokadoen/gowarcserver that referenced this issue Feb 25, 2021
This allows the user to set the compression type to save memory footprint at the cost of cpu cycles
@Avokadoen Avokadoen removed their assignment Mar 28, 2021
@Avokadoen Avokadoen linked a pull request Apr 3, 2021 that will close this issue
@Avokadoen Avokadoen removed a link to a pull request Apr 3, 2021
Avokadoen added a commit to Avokadoen/gowarcserver that referenced this issue Apr 20, 2021
The code was outdated and had to be modified to make sense with master
@Avokadoen
Contributor Author

Related pr #10

@Avokadoen
Contributor Author

Note: the version of badger has changed drastically since this issue was filed.
