
Optimistic Provide #783

Merged - 30 commits merged into libp2p:master from optprov on Apr 5, 2023

Conversation

Contributor

@dennis-tra dennis-tra commented Aug 3, 2022

This is the pull request for the Optimistic Provide project. It contains:

  1. A package for network size measurements - netsize
  2. A new top-level API - OptimisticProvide

Highlights

  • >23x and >32x provide speed-up for the 50th and 90th percentile respectively (0.87s and 1.85s)
  • Global network size estimates from local observations without additional network overhead
  • Completely backwards compatible - update your node and benefit
  • Time to first network-wide available provider record consistently two orders of magnitude faster
    though that may be a bit misleading... see below

Network Size Measurements

Peers refresh their routing tables every 10 minutes. This refresh process iterates through all buckets of the routing table; in each iteration it generates a random CID that falls into that bucket and searches for the 20 closest peers to that CID. We use the distances of the 20 closest peers to these randomly generated CIDs to derive a network size estimate. How we do this is described here.

Since the routing table maintenance happens anyway, this estimate incurs no additional networking overhead.
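
To illustrate the idea behind the estimator, here is a simplified, self-contained sketch (not the actual netsize API, which aggregates observations from many queries and uses a weighted fit): if peers were uniformly distributed over the normalized key space, the expected distance of the i-th closest peer to a random target would be roughly i/(N+1), so fitting a line through the origin to the observed distances gives a slope of about 1/(N+1).

package main

import "fmt"

// estimateNetworkSize is a simplified sketch of the estimation idea. Under a
// uniform peer distribution, the expected normalized distance of the i-th
// closest peer to a random key is ~ i/(N+1), so a least-squares line through
// the origin over (i, d_i) has slope s ≈ 1/(N+1), i.e. N ≈ 1/s - 1.
func estimateNetworkSize(distances []float64) float64 {
    var num, den float64
    for i, d := range distances {
        rank := float64(i + 1) // 1-based rank of the i-th closest peer
        num += rank * d
        den += rank * rank
    }
    slope := num / den
    return 1/slope - 1
}

func main() {
    // Hypothetical normalized distances of the 20 closest peers to one random
    // CID, as they would roughly look in a network of ~10,000 peers.
    distances := make([]float64, 20)
    for i := range distances {
        distances[i] = float64(i+1) / 10000.0
    }
    fmt.Printf("estimated network size: %.0f\n", estimateNetworkSize(distances))
}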

Optimistic Provide

The optimistic provide process uses the regular Provide algorithm but defines an additional termination condition that gets checked every time we receive new peers during the DHT walk.

In this termination logic we define two thresholds:

  • individualThreshold - If the distance of a peer to the CID that we provide is below this threshold, we store the provider record with it right away - but continue the DHT walk
  • setThreshold - If the average distance of the currently known 20 closest peers is below this threshold, we terminate the DHT walk

After the DHT walk terminates, we take the 20 closest peers from the query set and attempt to store the provider record with all of them that have not already been contacted (i.e., everyone except the peers that were already handled because their distance was below the individualThreshold).
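
A rough, self-contained sketch of that termination check (plain string peer IDs and hypothetical names to keep it dependency-free - this is not the actual lookup_estim.go code):

package main

import "fmt"

// optProvState sketches the two-threshold termination logic described above.
// It would be invoked from a stopFn-style hook every time the DHT walk learns
// about new peers.
type optProvState struct {
    individualThreshold float64         // store right away with peers closer than this
    setThreshold        float64         // stop the walk once the closest set averages below this
    sent                map[string]bool // peers we already sent ADD_PROVIDER to
    addProvider         func(peerID string)
}

// checkTermination returns true when the DHT walk should stop. In the real
// implementation the ADD_PROVIDER RPCs are sent asynchronously.
func (s *optProvState) checkTermination(closest []string, distOf func(string) float64) bool {
    var sum float64
    for _, p := range closest { // the currently known 20 closest peers
        d := distOf(p)
        sum += d
        // individualThreshold: store the provider record immediately, but keep walking.
        if d < s.individualThreshold && !s.sent[p] {
            s.sent[p] = true
            s.addProvider(p)
        }
    }
    // setThreshold: terminate once the average distance of the closest set is small enough.
    return sum/float64(len(closest)) < s.setThreshold
}

func main() {
    s := &optProvState{
        individualThreshold: 0.001,
        setThreshold:        0.002,
        sent:                map[string]bool{},
        addProvider:         func(p string) { fmt.Println("ADD_PROVIDER ->", p) },
    }
    closest := []string{"peerA", "peerB", "peerC"}
    dist := map[string]float64{"peerA": 0.0005, "peerB": 0.0030, "peerC": 0.0020}
    fmt.Println("terminate walk:", s.checkTermination(closest, func(p string) float64 { return dist[p] }))
}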

Important addition: once 15 of the 20 ADD_PROVIDER RPCs have finished (regardless of success), we return to the user and continue with the remaining five in the background. At this point the provider record can already be found, and the probability is high that we run into at least one timeout among the remaining five. In the graphs below, we differentiate these times as return and done. done is the time until the 20th provider record was stored.
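
A minimal sketch of that early-return behaviour (hypothetical names and simulated RPC latencies, not the actual implementation):

package main

import (
    "fmt"
    "sync"
    "sync/atomic"
    "time"
)

// provideToClosest fans out the ADD_PROVIDER RPCs and returns a channel that
// is closed as soon as returnThreshold of them have finished (successfully or
// not); the remaining RPCs keep running in the background.
func provideToClosest(peers []string, send func(string) error, returnThreshold int32) <-chan struct{} {
    returned := make(chan struct{})
    var finished int32
    var once sync.Once

    for _, p := range peers {
        p := p
        go func() {
            _ = send(p) // success and failure both count as "finished"
            if atomic.AddInt32(&finished, 1) >= returnThreshold {
                once.Do(func() { close(returned) })
            }
        }()
    }
    return returned
}

func main() {
    peers := make([]string, 20)
    for i := range peers {
        peers[i] = fmt.Sprintf("peer-%02d", i)
    }
    send := func(p string) error {
        d := 20 * time.Millisecond
        if p >= "peer-15" { // pretend the last five RPCs are slow (e.g. time out)
            d = 500 * time.Millisecond
        }
        time.Sleep(d)
        return nil
    }

    start := time.Now()
    <-provideToClosest(peers, send, 15) // control returns after 15 of 20 RPCs finished
    fmt.Println("returned to caller after", time.Since(start).Round(time.Millisecond))
    // In a long-running node the remaining five RPCs simply complete in the
    // background; this demo just exits.
}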

Measurements

Durations

image

Percentile   Classic   Optimistic (return)   Optimistic (done)
50th         20.32s    0.87s                 5.63s
90th         59.58s    1.85s                 25.50s
99th         180.00s   20.14s                60.00s

Speed Up

image

Precision

image

CDF of the selected peers' distances across all provide operations

Successful ADD_PROVIDER RPCs

image

Time to First Successfully Written Provider Record

image

The above graph shows the time it takes until the content is theoretically available in the network. The graph is a bit misleading, though: one provider record may not suffice, and the PutProvider function call returns immediately, so the time measurement underestimates the latencies. However, we're comparing apples with apples here, as the same applies to both approaches.

Corresponding Speed Up:

image

Network Size Measurements

image

Compare this with the same time period here from Nebula.

Limitations

  • It can take a few minutes until the node has enough data for a network size estimate.
  • The network size measurements behave differently for the Filecoin and IPFS networks. In the IPFS network, the estimates are consistently lower than the total number of crawled peers. In the Filecoin network, the estimates wobble around the total number of crawled peers. This can be seen here.

Implementation Details

  • I've used the stopFn hook to implement the custom termination logic.
  • Nothing's tested yet; tbh I'd have trouble providing proper test coverage for all the asynchronicity. The netsize package could be tested, but I had trouble crafting CIDs/PeerIDs. I think I could just use some of the measurements above.
  • I think it's safe to use the queryPeerset in the stopFn without paying attention to locking etc. If that's not the case, I have a thread-safe implementation lying around somewhere.
  • The netsize package draws inspiration from the gonum weighted mean and linear regression implementations. I wanted to avoid another dependency.
  • This didn't work out for the "inverse of the regularized incomplete Gamma integral" in lookup_estim.go to compute the distance thresholds. So, I pulled in gonum...
  • There are other top-level APIs that could make use of this approach, but I wanted to wait for feedback before I continue.

Possible Next Steps

  • Retrieval Success Rate Measurement - answering the question: are we still able to retrieve content as reliably as with the classic implementation?
  • Document the theory behind the threshold calculations here - answering the question: why the heck do we need the "inverse of the regularized incomplete Gamma integral"?
  • Find out why network size measurements in the IPFS network are consistently lower than the total number of peers.
  • A little gossiping of network size estimates (maybe five estimates from other peers?) could significantly reduce the variance -> measure
  • Find out how to propagate the network size estimate error to the threshold calculations
  • Find out how the network size estimate behaves in extreme network conditions
  • A notion of global network size can also be used for other things - eclipse/sybil detection?

Note: There was another approach that was using multiple queries instead of these estimations but we dropped that one due to these results.

@dennis-tra dennis-tra marked this pull request as ready for review August 4, 2022 08:44
@guseggert guseggert self-requested a review September 8, 2022 14:12
@guseggert
Contributor

guseggert commented Oct 10, 2022

If 15 of the 20 ADD_PROVIDER RPCs finished (regardless of success) we return to the user and continue with the remaining five in the background.

This can be difficult to deal with operationally because the work for handling the request can't exert backpressure and control flow isn't tied to the req (which e.g. complicates debugging).

One alternate approach here might be to kick off 21 or 22 ADD_PROVIDER RPCs and stop when 20 succeed. This might generate slightly more load on DHT servers, but how much and can we find a way to make it negligible?

Also if we do the work async, we should make sure it's using a bounded work queue, have metrics on the work queue, and drop work on the floor when the queue is full. (May not be literally a work queue, e.g. could be goroutines + semaphore.)
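
A minimal sketch of the goroutines-plus-semaphore variant (hypothetical names; the dropped counter stands in for a real metric and isn't goroutine-safe):

package main

import (
    "fmt"
    "time"
)

// backgroundPool bounds how many ADD_PROVIDER RPCs may keep running after
// control has returned to the caller; when the bound is reached, new work is
// dropped on the floor instead of queueing without limit.
type backgroundPool struct {
    sem     chan struct{} // capacity = max concurrent background jobs
    dropped int           // stand-in for a real metric counter
}

func newBackgroundPool(limit int) *backgroundPool {
    return &backgroundPool{sem: make(chan struct{}, limit)}
}

// trySubmit runs job in the background if a slot is free and reports whether
// it was accepted; callers that get false either do the work synchronously or
// skip it.
func (p *backgroundPool) trySubmit(job func()) bool {
    select {
    case p.sem <- struct{}{}: // acquire a slot without blocking
        go func() {
            defer func() { <-p.sem }() // release the slot when done
            job()
        }()
        return true
    default:
        p.dropped++ // queue full: drop the work
        return false
    }
}

func main() {
    pool := newBackgroundPool(2)
    job := func() { time.Sleep(50 * time.Millisecond) } // e.g. a pending ADD_PROVIDER RPC
    for i := 0; i < 4; i++ {
        fmt.Printf("job %d accepted: %v\n", i, pool.trySubmit(job))
    }
    fmt.Println("dropped:", pool.dropped)
}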

@dennis-tra
Contributor Author

Thanks for the review @guseggert

This can be difficult to deal with operationally because the work for handling the request can't exert backpressure and control flow isn't tied to the req (which e.g. complicates debugging).

Valid points :/ this wouldn't be the first time though in the libp2p stack :D (this obviously doesn't justify it here).

One alternate approach here might be to kick off 21 or 22 ADD_PROVIDER RPCs and stop when 20 succeed. This might generate slightly more load on DHT servers, but how much and can we find a way to make it negligible?

I hoped we could avoid any additional networking overhead, but that'd certainly work. Right now, we return to the user when we have received responses from 15 peers while we send the request to 20. If we wanted to keep the same ratio and only return after we have received responses from 20 peers, we'd need to shoot for ~26.6 peers (20 × 20/15).

This sounds like a lot (at least to me), but now I'm thinking that we're saving so many requests (not quantified) in the DHT walk that we'd presumably still be net negative in terms of networking overhead.

The "Successful ADD_PROVIDER RPCs" graph shows that if we only send the request to 22 peers we'd still run into a timeout in around 70% of the cases (with 26-27 peers, <10%).

(Just a note: A finding from another project is that we actually only need 15 provider records)

Also if we do the work async, we should make sure it's using a bounded work queue, have metrics on the work queue, and drop work on the floor when the queue is full. (May not be literally a work queue, e.g. could be goroutines + semaphore.)

Just to understand you correctly: we could have a limit on the number of in-flight ADD_PROVIDER RPCs that stick around after control is handed back to the user. We can't add more RPCs to that "queue" (as you say, it's not really a queue) if that limit is reached. In this case, the user would need to wait for all 20 RPCs to complete - as normal.

Is this what you mean?

@BigLep

BigLep commented Oct 18, 2022

@guseggert : did you have other code review comments you were going to provide?

Contributor

@Jorropo Jorropo left a comment

Those are the code quality things that stuck out to me.

dht_options.go (outdated; resolved)
internal/config/config.go (resolved)
lookup_estim.go (outdated; resolved)
lookup_estim.go (outdated; resolved)
lookup_estim.go (outdated; resolved)
routing.go (outdated)
@@ -441,6 +441,23 @@ func (dht *IpfsDHT) Provide(ctx context.Context, key cid.Cid, brdcst bool) (err
return ctx.Err()
}

func (dht *IpfsDHT) OptimisticProvide(ctx context.Context, key cid.Cid) (err error) {
Contributor

I would not expose this as a second method; now every caller needs its own if check to know what to call.

I would do if dht.enableOptProv { in the Provide method, and then dispatch to either the classic or optimistic client.
I do not think we are gonna start having provides where some of them can be done optimistically and others where we really want them to be precise.

Contributor Author

@dennis-tra dennis-tra Feb 21, 2023

Removed public API and added the following to the Provide method:

	if dht.enableOptProv {
		err := dht.optimisticProvide(ctx, keyMH)
		if errors.Is(err, netsize.ErrNotEnoughData) {
			logger.Debugln("not enough data for optimistic provide taking classic approach")
			return dht.classicProvide(ctx, keyMH)
		}
		return err
	}
	return dht.classicProvide(ctx, keyMH)

I thought of falling back to the classic approach if we don't have enough data points for the optimistic one.

Contributor

@Jorropo IMO a second method makes sense as OptimisticProvide isn't exactly the same use case as the Provide operation (see this comment). WDYT?

Contributor

@Jorropo Jorropo Mar 21, 2023

@guillaumemichel will we write code anytime soon that is Optimistic Provide aware?

By that I mean: will we have code that uses different publish strategies for different workloads?

I don't believe so, thus no need to make the API more complex and require all consumers to purposely call the right method.

Contributor

@guillaumemichel guillaumemichel Mar 21, 2023

will we write code anytime soon that is Optimistic Provide aware?

I don't think so, as IMO kubo isn't the best use case for Optimistic Provide.

However, if other applications want to use it (the goal of this feature is still to be useful), let's not screw their reprovide operation, otherwise no one will want to use this feature at all.

IMO the job achieved by optimistic provide is different from the one achieved by the normal Provide, and each has its own use cases; let users use both.

lookup_estim.go (outdated)
case es.dht.optProvJobsPool <- struct{}{}:
    // We were able to acquire a lease on the optProvJobsPool channel.
    // Consume doneChan to release the acquired lease again.
    go es.consumeDoneChan(rpcCount)
Contributor

Why launch a goroutine here?
Can't putProviderRecord do the following:

	// Release acquired lease for others to get a spot
	<-es.dht.optProvJobsPool

	// If all RPCs have finished, close the channel.
	if int(es.putProvDone.Add(1)) == until {
		close(es.doneChan)
	}

by itself? If that is because it is used twice (once in the non-optimistic path and once in the optimistic path), then just make two functions.

Contributor Author

I believe with optimistic vs. non-optimistic code paths you mean

  1. optimistic: the putProviderRecord call in the stopFn
  2. non-optimistic: the putProviderRecord call after runLookupWithFollowup has returned

When we spawn the putProviderRecord goroutines we don't know, in either case (optimistic/non-optimistic code path), whether the call will need to release the acquired lease. So I don't think we can put your proposed lines in there.

Imagine a very long-running putProviderRecord call in the optimistic code path. This will eventually acquire a lease on the job pool and need to release it. On the other hand, a short-running call would not need to do that.

lookup_estim.go (outdated)
    go es.consumeDoneChan(rpcCount)
case <-es.doneChan:
    // We were not able to acquire a lease but an ADD_PROVIDER RPC resolved.
    if int(es.putProvDone.Add(1)) == rpcCount {
Contributor

Do the inverse: set es.putProvDone to rpcCount at the start, and have workers do:

var minusOne uint32
minusOne-- // go doesn't want to constant-cast -1 to an uint32; this does the same thing and also gets folded.
if es.putProvDone.Add(minusOne) == 0 {

Also I guess putProvDone should then be renamed putProvInflight.

Contributor Author

I think the challenge with both your suggestions is that we don't know ahead of time if we are going to acquire a lease for an in-flight putProviderRecord call. We would need to communicate to that in-flight call that we acquired a lease for it, so that it releases all resources for us.

I think I see where you want to go with both of your suggestions but I can't see a simpler way to do that.

lookup_estim.go (outdated)
es.peerStatesLk.Unlock()

// wait until a threshold number of RPCs have completed
es.waitForRPCs()
Contributor

What happens if fewer than some minimum number of puts succeed?
Let's say all of the puts fail: I expect to return an error to the consumer (the network is probably broken), and I would expect es.waitForRPCs to synchronise that and have it return an error. (This doesn't have to be waitForRPCs precisely.)

Contributor Author

The behaviour I have implemented here mimics how the classic approach handles failed puts - it does not handle them.

We could indeed define a lower limit for when we consider a provide operation successful. But what would that number be: 1, 5, 10?

Then, should the implementation try its best to increase the number if it detects that it provided to fewer peers than this threshold?

Contributor Author

Implementing this behaviour would involve quite a bit of work.

Contributor

My 2c: I think it's fine to stay consistent with the classic approach and treat this as a separate issue.

lookup_estim.go (outdated; resolved)
@dennis-tra
Contributor Author

tests are failing because atomic.Int32 was only introduced in Go 1.19

@iand

iand commented Feb 24, 2023

tests are failing because atomic.Int32 was only introduced in Go 1.19

You can add an implementation for older versions of Go, using a build tag
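
For illustration, a hedged sketch of what that could look like (file names and the wrapper type are made up for this example, not taken from the PR): one file builds only on Go 1.19+ and aliases the new type, the other backfills it for older toolchains.

// atomicint_go119.go (built on Go 1.19 and newer)
//go:build go1.19

package dht

import "sync/atomic"

type atomicInt32 = atomic.Int32

// atomicint_prego119.go (built on Go 1.18 and older)
//go:build !go1.19

package dht

import "sync/atomic"

// atomicInt32 backfills the parts of the Go 1.19 atomic.Int32 API that the
// code needs, on top of the older function-based atomics.
type atomicInt32 struct{ v int32 }

func (a *atomicInt32) Add(delta int32) int32 { return atomic.AddInt32(&a.v, delta) }
func (a *atomicInt32) Load() int32           { return atomic.LoadInt32(&a.v) }
func (a *atomicInt32) Store(v int32)         { atomic.StoreInt32(&a.v, v) }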

@guillaumemichel
Contributor

guillaumemichel commented Feb 24, 2023

If you rebase the PR on master, you will have tests for go1.19 and go1.20. This has been updated in #812.

@guseggert
Contributor

Sorry if this was mentioned here before and I missed it, but do we need to reconsider anything here in light of the recent DHT issues? I remember discussion about how the impact might have been much worse if much of the network was using optimistic provide for publishing records.

@dennis-tra
Contributor Author

dennis-tra commented Mar 16, 2023

Sharing the relevant graph from Engelberg:

image

These graphs show the differences between Optimistic Providing and Classic Providing for the IPFS and Filecoin networks. The context was that >60% of the nodes in the IPFS network were not behaving well (unreachable). The graphs show that the classic approach was generally able to store more provider records with nodes in the network, though at the cost of a higher distance to the target key. The following graph shows this:

output

So, the classic approach stores more records but at potentially unsuitable locations that don't lie on the path to the target key. I'm hesitant to jump to the conclusion that it probably isn't that bad. I would love to back any conclusion up with some data, but I don't know how we would do that. @guillaumemichel to the rescue? :D So, I think it's hard to judge if optimistic provide is actually much worse.

@FotiosBistas did a study for his bachelor's thesis on the retrieval success rate using optimistic provide. He performed his study during the "challenging" network times. He did a test with 1.5k random CIDs. This is his finding:

image

So, "almost all" provider records stayed retrievable. "Almost all" is not super reassuring, but he also suggests that this might be a measurement issue.


I also think that with these two additions:

we wouldn't run into the problem that we observed back then.

@guseggert
Contributor

Okay if you and @guillaumemichel are on board with this, in light of those findings, then I'm happy to proceed.

Should we land #811 before this?

@guillaumemichel
Contributor

I agree that there is much room for improvement in the provide process.

What is the specific goal for optimistic provide?


Accelerating the provide operation can be decomposed into multiple components:

  1. Store the first provider records earlier on remote peers (content is available earlier in the network).
  2. The function returns faster to the caller.
  3. Store the provider records on nodes that are fast to answer.

1 Makes a lot of sense, and I think that Optimistic Provide is doing it right (didn't look at the implementation yet though).

2 corresponds to returning once 15 out of the 20 provider records have been allocated. I think the benefit of a function returning fast depends on the application: e.g. if you want to advertise a file temporarily for a one-time transfer, it doesn't make sense to wait for minutes before the remote peer can start looking for the provider record. However, when periodically reproviding content, no one is waiting for the operation to terminate. So here, providing more information about the status of the provide operation would make sense (e.g. with a chan). Applications could monitor the number of provider records that have been pinned and decide when to return by themselves.

3 I don't think this is a good approach, because the responsiveness of a peer depends on the vantage point. The provider records must be stored at the very least on the X closest peers to the CID location (even if X<20), otherwise the content may not be discoverable in the DHT (at least with the current lookup implementation). So I don't think that optimistic provide should stop after allocating 20, 22, or 25 provider records, but once the node cannot find new remote peers that are closer to the CID. It means that the request will certainly time out, but that isn't an issue, as applications that are time sensitive with the provide operation shouldn't need to wait for the end of the provide operation.

@dennis-tra
Contributor Author

Yes, to all three of your remarks 👍

I think number 2 could be a follow-up where we give the application layer more control over the provide process. Until now, the goal was to keep API compatibility by reusing the exposed Provide method: if we have a reasonable network size estimate, we use Optimistic Provide; otherwise, we use the classic approach. Giving the user more flexibility would be something to consider, but I don't think this prohibits moving forward with the current state of things in this PR :)

Okay if you and @guillaumemichel are on board with this

I'm on board 👍

lookup_optim.go (outdated; resolved)
lookup_optim.go (outdated)
// calculate distance of peer p to the target key
distances[i] = netsize.NormedDistance(p, os.ksKey)

// Check if we have already interacted with that peer
Contributor

@guseggert guseggert Mar 19, 2023

I think the wording can be more precise here: we haven't necessarily interacted with the peer when it's in peerStates, it just means that we've scheduled interaction with the peer. (There are some other comments with this same ambiguity.)

I can't find any instances of this ambiguity resulting in an actual race, but might as well clear up the comments so that one isn't accidentally added in the future.

Contributor Author

Just to understand you correctly: do you mean the brief moment between when we have stored e.g. Sent into peerStates and when the corresponding putProviderRecord goroutine is actually scheduled? That would indeed be an instance where we haven't interacted with the peer but only scheduled interaction 👍

Contributor

yeah that's all I meant, it's definitely a nit but just trying to be precise

Contributor

@guillaumemichel guillaumemichel left a comment

I think that optimistic provide should not replace the current provide operation.

I think that the Provide function is mainly used to reprovide content (in addition to Providing it for the first time), and thus isn't time sensitive.

IIUC the main use case of optimistic provide is to enable the Provide function to return faster to the user, e.g. for an instant file-sharing app where A pins a file to the DHT and B immediately fetches it. The trade-off here is to sacrifice some accuracy for speed, which is totally reasonable in this use case. However, I don't think that kubo would benefit from it, because we don't want to sacrifice accuracy for speed for non-time-sensitive operations (e.g. reprovide).

I recognize the value that optimistic provide brings to IPFS. I think that it should not replace but rather extend the DHT interface, by exposing a new OptimisticProvide in addition to the already existing Provide operation. This would allow files provided with OptimisticProvide to be reprovided using the more accurate Provide operation. And it allows applications that benefit from OptimisticProvide to use it in a more controlled way.

@dennis-tra
Contributor Author

dennis-tra commented Mar 20, 2023

I recognize the value that optimistic provide brings to IPFS. I think that it should not replace but rather extend the DHT interface, by exposing a new OptimisticProvide in addition to the already existing Provide operation. This would allow files provided with OptimisticProvide to be reprovided using the more accurate Provide operation. And it allows applications that benefit from OptimisticProvide to use it in a more controlled way.

This is how I had it before the first round of feedback. This is the relevant comment. From that comment:

I do not think we are gonna start having provides where some of them can be done optimistically and others where we really want them to be precise.

I think you just gave the perfect example @guillaumemichel 👍 so, I'd also be in favour of having two separate methods or alternatively some way of letting the application signal what it wants. The latter would also result in a second exposed method or be a breaking API change.

(There are also several places in libp2p where behaviour is changed by passing a value in the context down the stack - I'm not a fan of that.)

@guillaumemichel
Contributor

I'd also be in favour of having two separate methods or alternatively some way of letting the application signal what it wants

I fully agree! The new method could expose additional information (such as the number of provider records pinned so far).

@guseggert
Contributor

guseggert commented Mar 20, 2023

If the UX is not going to be "the same but with these different provide tradeoffs", but instead is going to be "new APIs, mix-and-match optimistic provide and traditional provide", then I think we should do a lot more due diligence about what the actual UX will be before we merge this, since that should drive the design here.

I'm concerned that this is scope creep. My preference would be to ship what we have now as experimental, collect some feedback, and then continue to iterate on it until we think it meets the bar for "non-experimental". This can include additional UX conversations / design.

@dennis-tra
Contributor Author

I'm also quite keen to get this PR landed sooner rather than later because it's been sitting around for quite some time already. So, I'm totally up for postponing this discussion in favour of making some progress and gathering feedback. I also think that the current PR interfaces minimally with the existing code, so changing it later should be quite easy. Flagging it as experimental signals that we're not making any promises regarding this feature - so it should be fine to change this behaviour later.

@yiannisbot

I think there's consensus here to move forward with an experimental release and I'm in favour of doing so. A couple of thoughts:

  • If we leave it as an option to the user/application (and not have it on by default), perhaps we'd want to call it fast-provide, which is much clearer to understand (?)
  • It's not impossible for users to care about latency on re-provides, e.g., when someone is pushing an update to a website and wants to see the updated version, but I understand it's not going to be the majority of cases.

Without trying to hold things back, I think it is useful to define what the criteria are to move this out of the experimental release and into the standard one. If that's already discussed somewhere (above or elsewhere), I'd appreciate a pointer :)

@guillaumemichel
Contributor

It's not impossible for users to care about latency on re-provides, e.g., when someone is pushing an update to a website and wants to see the updated version, but I understand it's not going to be the majority of cases.

When someone is pushing an update to a website, they provide a new CID, hence it is a provide, not a reprovide. Please correct me if I am wrong: IPNS records are published using the Put method, not Provide. The reprovide operation only consists of making sure that some existing content doesn't disappear from the network, and IMO it is never time sensitive.

I am not convinced that many applications would (or should) fully replace the Provide operation with OptimisticProvide (or FastProvide), because some accuracy is lost. If we want this feature to be usable, IMO the best option is to extend the API with a new FastProvide function returning a channel of how many provider records have been pushed to the network so far. It would allow each application to define when to return to the user, and it would let applications use the normal reprovide. Anyway, this isn't possible with the current Provide interface, so we have to extend the API to add new features.
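
For illustration, a minimal sketch of what such a progress channel could look like (hypothetical names and simulated RPCs; nothing like this is part of the PR):

package main

import (
    "context"
    "fmt"
    "time"
)

// ProvideUpdate reports the progress of an ongoing provide operation.
type ProvideUpdate struct {
    StoredRecords int   // provider records successfully stored so far
    Err           error // non-nil if an ADD_PROVIDER RPC failed
}

// fastProvide simulates fanning out ADD_PROVIDER RPCs to the given peers and
// sends an update after each one; the channel is closed once all RPCs finished.
func fastProvide(ctx context.Context, key string, peers []string) <-chan ProvideUpdate {
    updates := make(chan ProvideUpdate, len(peers)) // buffered so the producer never blocks
    go func() {
        defer close(updates)
        stored := 0
        for range peers {
            time.Sleep(10 * time.Millisecond) // stand-in for a real RPC
            stored++
            select {
            case updates <- ProvideUpdate{StoredRecords: stored}:
            case <-ctx.Done():
                return
            }
        }
    }()
    return updates
}

func main() {
    peers := make([]string, 20)
    updates := fastProvide(context.Background(), "some-cid", peers)

    // Each application decides for itself how many records are "enough": here
    // the caller returns after 15 and lets the remaining RPCs finish on their own.
    for u := range updates {
        if u.StoredRecords >= 15 {
            fmt.Println("enough provider records stored:", u.StoredRecords)
            break
        }
    }
}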

@dennis-tra dennis-tra force-pushed the optprov branch 2 times, most recently from 617a6ea to 9cb79d1 on March 31, 2023 14:42
@dennis-tra
Contributor Author

Smoke test covers:

image

@dennis-tra
Contributor Author

dennis-tra commented Apr 3, 2023

I have deployed the latest commit on our DHT lookup measurement infrastructure. These are the numbers (source):

p50

Classic Optimistic

p90

Classic Optimistic

Sample Size

Optimistic: ~875
Classic: ~3.3k

(I just deployed the optimistic version today, hence fewer provides)


Both approaches have 100% retrieval success rates


What the graphs don't show:

  • with how many peers the provider records were stored for either approach
  • how close the peers holding the provider records were to the target key

Network Size Estimations

image

@BigLep BigLep mentioned this pull request Apr 4, 2023
lookup_optim.go (outdated; resolved)
@dennis-tra
Contributor Author

@guseggert Incorporated feedback, and tests are running fine on my machine.

@guseggert guseggert merged commit 32fbe47 into libp2p:master Apr 5, 2023
@guillaumemichel
Contributor

Go Checks / All is failing, and isn't part of the flaky tests.

Error: File is not gofmt-ed.

Error: ./lookup_optim_test.go:23:2: rand.Seed has been deprecated since Go 1.20 and an alternative has been available since Go 1.0: Programs that call Seed and then expect a specific sequence of results from the global random source (using functions such as Int) can be broken when a dependency changes how much it consumes from the global random source. To avoid such breakages, programs that need a specific result sequence should use NewRand(NewSource(seed)) to obtain a random generator that other packages cannot access. (SA1019)

@dennis-tra
Contributor Author

dennis-tra commented Apr 5, 2023

Go Checks / All is failing, and isn't part of the flaky tests.

Thanks for pointing that out @guillaumemichel! I added the test file just recently and previously unrelated tests were failing. Here's a PR that tries to address both issues: #833
