More hints? #6
Comments
I won't say my solutions are idiomatic or even 100% correct, but you can find them at https://github.com/DavidWittman/go-concurrency-exercises/tree/solutions if you want to compare to your own. But 👍 to more hints in general. |
Why not have a solutions folder as well? That way it would be easier to discuss whether a solution is idiomatic or not. |
Hi @DavidWittman, it is nice to have your solutions, so I can cross-check my answers. But something seems strange with that exercise, so I just want to mention (correct me if I am wrong): using your solution, the race condition still happens. I found that adding a mutex at the beginning of the Get func, as sketched below, solves the race condition. |
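A minimal sketch of this approach, assuming a `mu sync.Mutex` field on KeyStoreCache (the field name is an assumption; the rest follows the exercise's Get implementation shown later in this thread):

```go
// Get gets the key from cache, loading it from the source if needed.
// Holding one mutex for the whole method makes the check-then-write
// sequence atomic, at the cost of serializing all readers.
func (k *KeyStoreCache) Get(key string) string {
	k.mu.Lock() // assumed field: mu sync.Mutex
	defer k.mu.Unlock()

	val, ok := k.cache[key]

	// Miss - load from the source and save it in cache
	if !ok {
		val = k.load(key)
		k.cache[key] = val
		k.pages.PushFront(key)

		// if the cache is full, evict the least recently used item
		if len(k.cache) > CacheSize {
			delete(k.cache, k.pages.Back().Value.(string))
			k.pages.Remove(k.pages.Back())
		}
	}
	return val
}
```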
@kennykarnama I haven't looked at this in a while, but I think I was only locking around the write to the cache, as indicated in the comment for that exercise:
My concurrency tests (with |
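For comparison, a sketch of what locking only around the cache mutation might look like (an illustration, not the actual code from the linked branch); the data race reported above comes from the map read that still happens outside the lock:

```go
// Sketch: only the cache mutation is protected. The unsynchronized map
// read at the top still races with this locked write when other
// goroutines call Get concurrently, which is what `go test -race` flags.
func (k *KeyStoreCache) Get(key string) string {
	val, ok := k.cache[key] // unsynchronized read

	// Miss - load from the source and save it in cache
	if !ok {
		val = k.load(key)

		k.mu.Lock() // assumed field: mu sync.Mutex
		k.cache[key] = val
		k.pages.PushFront(key)
		if len(k.cache) > CacheSize {
			delete(k.cache, k.pages.Back().Value.(string))
			k.pages.Remove(k.pages.Back())
		}
		k.mu.Unlock()
	}
	return val
}
```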
@kennykarnama your solution does solve the race condition in @DavidWittman's solution, but it also prevents all concurrency in that area of code. Having multiple thread-safe readers and non-thread-safe writer(s) is a common problem, and sync.RWMutex supports this situation: multiple threads can access areas protected by RLock() & RUnlock(), but only one thread at a time can access an area protected by Lock() & Unlock(). To elaborate: when Lock() is called, the current thread blocks until every thread that called RLock() has called RUnlock(), and any further attempt to call RLock() blocks until Unlock() is called. I'm just learning this myself, but I hope it helps!

```go
// Get gets the key from cache, loads it from the source if needed
func (k *KeyStoreCache) Get(key string) string {
	k.rw.RLock()
	val, ok := k.cache[key] // read is safe under the read lock
	k.rw.RUnlock()          // defer here would deadlock on a miss (we would wait on ourselves)

	// Miss - load from database and save it in cache
	if !ok {
		k.rw.Lock()
		defer k.rw.Unlock()

		// If multiple readers requested the missing page, each one would
		// attempt to load it, leading to duplicates. So catch baby swipers:
		// re-check whether the resource was loaded while we waited for the
		// write lock. Note the `=` rather than `:=` here -- shadowing val
		// would make the function return an empty string after a miss.
		val, ok = k.cache[key]
		if ok {
			return val
		}

		val = k.load(key)
		k.cache[key] = val
		k.pages.PushFront(key)

		// if cache is full remove the least used item
		if len(k.cache) > CacheSize {
			delete(k.cache, k.pages.Back().Value.(string))
			k.pages.Remove(k.pages.Back())
		}
	}
	return val
}
```
|
@ivanthewebber your solution also has a race. Please correct me if I'm wrong.
Any time there is a "test-and-set", it probably warrants doing the test and the set atomically. In some cases, given the domain and your definition of correctness, you can get away with more granular locking. Just my 2 cents. Happy to discuss more. |
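One hypothetical way to keep the test and the set atomic per key while still allowing concurrent readers is golang.org/x/sync/singleflight. A sketch under those assumptions (the struct fields here are illustrative, not the exercise's actual definition, and LRU eviction is omitted for brevity):

```go
import (
	"sync"

	"golang.org/x/sync/singleflight"
)

// Hypothetical cache: an RWMutex guards the map, and singleflight
// collapses concurrent loads of the same missing key so only one
// goroutine runs the expensive loader.
type KeyStoreCache struct {
	mu    sync.RWMutex
	cache map[string]string
	load  func(string) string
	group singleflight.Group
}

func (k *KeyStoreCache) Get(key string) string {
	k.mu.RLock()
	val, ok := k.cache[key]
	k.mu.RUnlock()
	if ok {
		return val
	}

	// All concurrent callers for the same key share one execution of
	// this function; the test and the set both happen inside it.
	v, _, _ := k.group.Do(key, func() (interface{}, error) {
		// Re-check: another goroutine may have filled the cache between
		// our read miss and this point.
		k.mu.RLock()
		cached, hit := k.cache[key]
		k.mu.RUnlock()
		if hit {
			return cached, nil
		}

		loaded := k.load(key)
		k.mu.Lock()
		k.cache[key] = loaded
		k.mu.Unlock()
		return loaded, nil
	})
	return v.(string)
}
```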
Hey,
First off, thanks a lot for this project. It's a great help for learning exactly the things I'm interested in learning more about in Go!
I'm new to the language and don't know most of the good practices, and sometimes not even the basic concurrency primitives. I've gone through the first three problems and have some solutions to them in my own fork, but I don't know whether the solutions I come up with are idiomatic, or even decent. It would be great if there were a way to find hints for the solutions, hidden away from the main readme. Maybe something as basic as a spoiler.md file that would contain something like: "you can do this with three lines of code using a Mutex. Btw, you don't need to put the mutex around the whole function, just lines 11-13 work." What do you think about including some sort of hints to help newbies know that they're on the right path and not just hacking together poorly-thought-out solutions?