* RedisConfig -> Redis
* moved redis config to separate file
* bugfix in config test during parallel processing
* implement config.Configurable in Redis config
* use Context in GetRedisCache
* use Context in New
* caching resolver test fix
* use Context in PublishEnabled
* use Context in getResponse
* remove ctx field
* bugfix in api interface test
* properly close channels
* set ruler for go files from 80 to 111
* line break because function length is too long
* only execute redis.New if it is enabled in config
* stabilized flaky tests
* Update config/redis.go
Co-authored-by: ThinkChaos <ThinkChaos@users.noreply.github.com>
* Update config/redis_test.go
Co-authored-by: ThinkChaos <ThinkChaos@users.noreply.github.com>
* Update config/redis_test.go
Co-authored-by: ThinkChaos <ThinkChaos@users.noreply.github.com>
* Update config/redis_test.go
Co-authored-by: ThinkChaos <ThinkChaos@users.noreply.github.com>
* Update config/redis.go
Co-authored-by: ThinkChaos <ThinkChaos@users.noreply.github.com>
* Update config/redis_test.go
Co-authored-by: ThinkChaos <ThinkChaos@users.noreply.github.com>
* fix ruler
* redis test refactoring
* vscode setting cleanup
* removed else if chain
* Update redis_test.go
* context race fix
* fail test on missing sentinel servers
* cleanup context usage
* cleanup2
* context fixes
* added context util
* disabled nil context rule for tests
* fixed copy-paste error: ctxSend -> CtxSend
* use util.CtxSend
* fixed comment
* fixed flaky test
* failsafe and tests
---------
Co-authored-by: ThinkChaos <ThinkChaos@users.noreply.github.com>
Move `startVerifyUpstream` to `upstreams.startVerify` so it's accessible
via `UpstreamGroup` and we don't need to pass `startVerify` to all
resolver constructors that call `NewUpstreamResolver`.
This also has the nice benefit of greatly reducing the usage of `GetConfig`.
- `CacheControl.FlushCaches`
- `Querier.Query`
- `Resolver.Resolve`
Besides all the API churn, this leads to `ParallelBestResolver`,
`StrictResolver` and `UpstreamResolver` simplification: timeouts only
need to be set up in one place, `UpstreamResolver`.
We also benefit from using HTTP request contexts, so if the client
closes the connection we stop processing on our side.
* extension cleanup & added ginkgo watch
* added gcov2lcov
* added test explorer and reworked scripts
* go mod tidy
* use package cache volume
* script rework
* defined tasks
* defined launch
* don't try to convert if test was canceled
* generate lcov only in devcontainer
* disable coverage upload on forks
* wip: make lcov
* fixed unit tests for parallel
* parallel test for lists
* fix serve test for parallel
* parallel test fixes
* deleted accidental commit
* wip: make lcov
* restructured settings location
* start script refactoring
* added GetProcessPort
* fixed parallel ports
* race fix
* changed port for github runner binding
* fixed local list var in test
* more local vars in tests fix
* less local vars
* run test & race parallel
* removed invalid error check
* fixed error check
* less local variables
* fixed timing problem
* removed gcov2lcov
* added generate-lcov
* added GINKGO_PROCS to makefile
* fixed workflow
* run generate-lcov on save *.go
* added tooltitude