Challenge: probability based selection
Here is an interesting problem that I just ran into. I need to select a value from a (small) set based on percentage. That seems like it would be simple, but for some reason I can't figure out an elegant way of doing this.
Here is my current solution:
var chances = new Page[100];
int index = 0;
foreach (var page in pages)
{
    for (int i = index; i < index + page.PercentageToShow; i++)
    {
        chances[i] = page;
    }
    index += page.PercentageToShow;
}
return chances[new Random().Next(0, 100)];
This satisfies the requirement, but it is… not as elegant as I would wish it to be.
I may have N values, for small N. There isn't any limitation on the percentage allocation, so we may have (50%, 10%, 12%, 28%). We are assured that the numbers will always sum to 100.
More posts in "Challenge" series:
- (01 Jul 2024) Efficient snapshotable state
- (13 Oct 2023) Fastest node selection metastable error state–answer
- (12 Oct 2023) Fastest node selection metastable error state
- (19 Sep 2023) Spot the bug
- (04 Jan 2023) what does this code print?
- (14 Dec 2022) What does this code print?
- (01 Jul 2022) Find the stack smash bug… – answer
- (30 Jun 2022) Find the stack smash bug…
- (03 Jun 2022) Spot the data corruption
- (06 May 2022) Spot the optimization–solution
- (05 May 2022) Spot the optimization
- (06 Apr 2022) Why is this code broken?
- (16 Dec 2021) Find the slow down–answer
- (15 Dec 2021) Find the slow down
- (03 Nov 2021) The code review bug that gives me nightmares–The fix
- (02 Nov 2021) The code review bug that gives me nightmares–the issue
- (01 Nov 2021) The code review bug that gives me nightmares
- (16 Jun 2021) Detecting livelihood in a distributed cluster
- (21 Apr 2020) Generate matching shard id–answer
- (20 Apr 2020) Generate matching shard id
- (02 Jan 2020) Spot the bug in the stream
- (28 Sep 2018) The loop that leaks–Answer
- (27 Sep 2018) The loop that leaks
- (03 Apr 2018) The invisible concurrency bug–Answer
- (02 Apr 2018) The invisible concurrency bug
- (31 Jan 2018) Find the bug in the fix–answer
- (30 Jan 2018) Find the bug in the fix
- (19 Jan 2017) What does this code do?
- (26 Jul 2016) The race condition in the TCP stack, answer
- (25 Jul 2016) The race condition in the TCP stack
- (28 Apr 2015) What is the meaning of this change?
- (26 Sep 2013) Spot the bug
- (27 May 2013) The problem of locking down tasks…
- (17 Oct 2011) Minimum number of round trips
- (23 Aug 2011) Recent Comments with Future Posts
- (02 Aug 2011) Modifying execution approaches
- (29 Apr 2011) Stop the leaks
- (23 Dec 2010) This code should never hit production
- (17 Dec 2010) Your own ThreadLocal
- (03 Dec 2010) Querying relative information with RavenDB
- (29 Jun 2010) Find the bug
- (23 Jun 2010) Dynamically dynamic
- (28 Apr 2010) What killed the application?
- (19 Mar 2010) What does this code do?
- (04 Mar 2010) Robust enumeration over external code
- (16 Feb 2010) Premature optimization, and all of that…
- (12 Feb 2010) Efficient querying
- (10 Feb 2010) Find the resource leak
- (21 Oct 2009) Can you spot the bug?
- (18 Oct 2009) Why is this wrong?
- (17 Oct 2009) Write the check in comment
- (15 Sep 2009) NH Prof Exporting Reports
- (02 Sep 2009) The lazy loaded inheritance many to one association OR/M conundrum
- (01 Sep 2009) Why isn’t select broken?
- (06 Aug 2009) Find the bug fixes
- (26 May 2009) Find the bug
- (14 May 2009) multi threaded test failure
- (11 May 2009) The regex that doesn’t match
- (24 Mar 2009) probability based selection
- (13 Mar 2009) C# Rewriting
- (18 Feb 2009) write a self extracting program
- (04 Sep 2008) Don't stop with the first DSL abstraction
- (02 Aug 2008) What is the problem?
- (28 Jul 2008) What does this code do?
- (26 Jul 2008) Find the bug fix
- (05 Jul 2008) Find the deadlock
- (03 Jul 2008) Find the bug
- (02 Jul 2008) What is wrong with this code
- (05 Jun 2008) why did the tests fail?
- (27 May 2008) Striving for better syntax
- (13 Apr 2008) calling generics without the generic type
- (12 Apr 2008) The directory tree
- (24 Mar 2008) Find the version
- (21 Jan 2008) Strongly typing weakly typed code
- (28 Jun 2007) Windsor Null Object Dependency Facility
Comments
Look at roulette wheel selection for genetic algorithms. But I haven't seen any significantly better implementation.
Maybe I'm missing something, but aren't you just finding the page whose percentage range (offset by the sum of the previous percentages) contains a random number? If so, you should be able to remove at least one loop.
how about:
var pagesperc = new Dictionary<Page, int>();
int total = 0;
foreach (var page in pages)
{
    total += page.PercentageToShow;
    pagesperc[page] = total;
}
int rnd = new Random().Next(0, 100);
foreach (var kv in pagesperc)
{
    ...
}
@Remco at that point, why don't you just put the random number selection before the first loop and do away with the second loop entirely?
I don't understand how this is supposed to work.
Let us assume that you have 50% / 50%.
And random returns 7.
Maybe the one-loop approach (you'd have to use a different first loop than Remco's) isn't more elegant; I just thought you might be looking at the problem backwards.
I'm not sure about the language, but wouldn't this work?
int index = new Random().Next(0, 100);
foreach (var page in pages)
{
if (index < page.PercentageToShow) return page;
index -= page.PercentageToShow;
}
?
@Ayende
sorry, didn't test it.
This should work:
replace the last loop with:
int rand = new Random().Next(0, 100);
int percentSoFar = 0;
foreach (var page in pages)
{
    percentSoFar += page.PercentageToShow;
    if (rand < percentSoFar)
        return page;
}
// error? (only reached if the percentages don't sum to 100)
Importantly:
1: It's fast enough
2: It's very easy to understand
3: It works
So I wouldn't change it.
@Matt
that works too! Nice one.
You could iterate through the pages and, for each page, generate a boolean value from the weighted probability of that page being selected (the page's percentage over the percentage of all remaining pages, including that page). If that value is true, return the page; a runnable version follows the sample below.
E.g., if you get to the last page, the probability of returning it will be 1.0.
Matt, I tried that too but with 2 pages (10% and 20%) I didn't get anything near a 2 to 1 ratio, more of an 8 to 1.
Pseudocode:
percent = 100.0
for each page in pages {
    if testprobability(page.percent / percent)
        return page
    percent -= page.percent
}
bool testprobability(probability) {
    // return true if a random number between 0 and 1.0 is less than probability
}
sample:
page1 20% probability == 20/100
page2 50% probability == 50/80
page3 30% probability == 30/30
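For reference, here is a runnable C# version of that pseudocode (a sketch only, reusing the post's Page class with its PercentageToShow property; the method and field names are illustrative):
// A shared instance; creating a new Random per call tends to repeat values.
static readonly Random random = new Random();

static Page SelectByConditionalProbability(IEnumerable<Page> pages)
{
    double remaining = 100.0;
    foreach (var page in pages)
    {
        // Probability of this page, given that all earlier pages were skipped.
        // For the last page this works out to 1.0, so the loop always returns.
        if (random.NextDouble() < page.PercentageToShow / remaining)
            return page;
        remaining -= page.PercentageToShow;
    }
    throw new InvalidOperationException("percentages must sum to 100");
}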
@Peter - 10 + 20 != 100.
"We are assured that the numbers will always match to a 100."
This reminds me of my probability vectors from playing with Markov chains. There are several ways to do it; when you don't have a lot of items in your array, the best way is to use some kind of sparse array:
int rnd = new Random().Next(0, 100);
pages.SkipWhile(p => p.PercentageToShow < rnd).Take(1);
Note: PercentageToShow here is actually a weighted percentage (p.Percentage plus every p.Percentage less than it). If you want to stick to absolute percentages, you have to massage your collection first, with a loop, an aggregate of some sort, and a sort.
Brian, the 100 is irrelevant; the code is functionally identical whether the percentages add up to 100, 30, or 99. The ratio should still be 2 to 1.
(continued)
Your weights must be ordered ascending, and they must be weights, not absolute percentages. Going from absolutes to weighted percentages is trivial and could be done ahead of time, once and for all.
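A sketch of that preprocessing plus the selection, assuming the post's Page class and absolute percentages as input (the helper and its names are illustrative, not from the comment above):
static readonly Random random = new Random();

static Page SelectViaCumulative(IList<Page> pages)
{
    // One-time massage: turn absolute percentages (50, 10, 12, 28)
    // into running totals (50, 60, 72, 100).
    var cumulative = new List<(Page Page, int Total)>();
    int runningTotal = 0;
    foreach (var page in pages)
    {
        runningTotal += page.PercentageToShow;
        cumulative.Add((page, runningTotal));
    }

    // rnd is in [0, 99]; skip every region that ends at or before it
    // and take the first region that contains it.
    int rnd = random.Next(0, 100);
    return cumulative.SkipWhile(x => x.Total <= rnd).First().Page;
}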
I have had a similar issue. There was a dictionary of elements (T), and there was a probability of choosing each element (double) [Dictionary<T, double> Weights]:
// Make sure that the sum of the probabilities = 1.0
Normalize();
// All weights lie on a single tape with length 1.0
// The tape is divided into regions whose length equals their probability
// Spin the roulette and stop somewhere on the tape
double rouletteStop = random.NextDouble();
// Search for the element that we stopped at
T lastElement = default(T);
foreach (T key in Weights.Keys)
{
    lastElement = key;
    rouletteStop -= Weights[key];
    if (rouletteStop < 0)
        return key;
}
// We are at the end of the tape or there was a rounding error
return lastElement;
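The Normalize call isn't shown above; a minimal sketch, assuming Weights is that Dictionary<T, double> field:
void Normalize()
{
    // Scale every weight so the whole tape has length exactly 1.0.
    double total = Weights.Values.Sum();
    foreach (T key in Weights.Keys.ToList()) // copy the keys: we mutate the dictionary
        Weights[key] /= total;
}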
I think it's because you are combining the weighted distribution with the selection of the next page.
I don't know if this is any better, but maybe something like:
List<WeightedString> distributedStrings = new List<WeightedString>(100);
List<WeightedString> weightedStrings = new List<WeightedString>();
... // the rest of the generic code was stripped by the blog engine
(me again)
It's really useful for Markov chains, where you can have big probability differences between items, greater than 1%. Also, if you have a lot of items, you can implement a nice binary search to get to your value, but then you can kiss LINQ bye-bye (because of its forward-only streaming).
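For the many-items case, a binary search over a precomputed cumulative-weight array might look like the sketch below (all names are illustrative):
// cumulative[i] holds the running total of weights 0..i, so item i "owns"
// the interval from cumulative[i-1] up to cumulative[i].
// Find the first index whose running total exceeds the random draw.
static int SelectIndex(double[] cumulative, Random random)
{
    double draw = random.NextDouble() * cumulative[cumulative.Length - 1];
    int lo = 0, hi = cumulative.Length - 1;
    while (lo < hi)
    {
        int mid = (lo + hi) / 2;
        if (cumulative[mid] <= draw)
            lo = mid + 1; // draw falls past this region, search right
        else
            hi = mid;     // draw falls in or before this region, search left
    }
    return lo;
}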
Subtext stripped out all the generic statements. Let's try again:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
namespace ConsoleApplication45
{
    ... // the List<Page> declarations were stripped again
}
Aha! It was the RND that was the problem: it was returning the same value because it was being created each time. Making it static fixed it.
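For anyone hitting the same thing: in the .NET Framework of that era, new Random() seeds itself from the system clock, so instances created in quick succession share a seed and produce identical sequences. A minimal illustration (hypothetical helper names):
// Bad: a fresh Random per call; back-to-back instances usually get
// the same time-based seed, so the "random" values repeat.
static int Roll() => new Random().Next(0, 100);

// Good: seed once, reuse everywhere (note: Random is not thread-safe).
static readonly Random random = new Random();
static int BetterRoll() => random.Next(0, 100);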
Must...Test...First...
I meant to have the Add method clear the list before redistributing the elements to it. Sorry about that.
Sorry, that would be better as...
void Add(List<Page> pages)
{
    // clear the list before redistributing the elements into it
    ...
}
@Peter - It's probably a bit of a silly argument in this case, but the fact that it works for values that don't add up to 100 is irrelevant. The spec specifically stated that they would sum to 100.
The first question is "Why are you rebuilding the chance[] array every time?"
If the answer is "Because the PercentageToShow values may change between calls", then you are better off with some variant of the code offered by Adam/Matt/Peter. They are O(N) versus your O(100) (where N must be <100, or the algorithm won't work).
However, if the answer is "I'm not. It just looks that way in the snippet", then you are probably better off with what you are doing. It's O(1) with a presumably amortizable O(100) one-time set-up.
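In that second case, a cached variant of the original code could look like this sketch (field and method names are illustrative):
static Page[] chances; // built once; afterwards every call is O(1)
static readonly Random random = new Random();

static Page Select(IEnumerable<Page> pages)
{
    if (chances == null)
    {
        chances = new Page[100];
        int index = 0;
        foreach (var page in pages)
        {
            for (int i = index; i < index + page.PercentageToShow; i++)
                chances[i] = page;
            index += page.PercentageToShow;
        }
    }
    return chances[random.Next(0, 100)];
}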
Matt's solution looks correct to me, and is the standard approach used in simulation. You can even optimize it a bit, by sorting your pages by decreasing probability: starting with the highest probability will likely terminate your loop earlier. Most likely irrelevant, given the size of the problem, though!
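That reordering is a one-time LINQ call, e.g.:
// Sort once so the loop tests the most likely pages first.
pages = pages.OrderByDescending(p => p.PercentageToShow).ToList();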
Matt's solution is pretty much what I had in mind, so +1 there.
Brian, your original statement implied that my code was wrong because it doesn't assume the percentages add up to 100. My point is that it doesn't matter what the percentages add up to; my routine will work with the correct ratio anyway, so there is no point in restricting it.
Besides, when a customer says "Always" what they actually mean is "Mostly", and when they say "Never" they mean "Hardly ever".