Hi, my friend and I invented this card game called RedBlack with the following rules:
A deck of cards is shuffled and split evenly between all players.
Players hold their decks face-down and do not look at their cards.
The player to the left of the dealer starts the game by playing the card on the top of their pile.
Continuing clockwise, players must play their top card on the pile until a card of a different colour to the first card is played.
The player who plays the card of a different colour must then pick up the played cards and place them on the bottom of their pile (without shuffling them). Play to the left of the player that picks up.
You win the game if you get rid of all your cards.
It's a game that has absolutely no skill, but it's interesting just how long a game can take to end. After some investigation, I was able to find a few "infinite games" between two players that would be never-ending. For instance, consider this game with 5 cards:
Start: P1 has `RB`, P2 has `RRB` – P1 to go.
First turn: P1 plays `R`, P2 plays `R`, P1 plays `B` and picks up. Now P1 has `RRB`, P2 has `RB` – P2 to go.
Second turn: P2 plays `R`, P1 plays `R`, P2 plays `B` and picks up. Now P1 has `RB`, P2 has `RRB` – P1 to go.
This is the same as it was when the cards were first dealt, so if played with no mistakes, this game would continue forever!
What I was wondering is whether there is an infinite game for a standard deck of cards (26 red, 26 black). Is there a good algorithm I could use to find this out? So far I've written a program in x64 assembly, but my algorithm basically just plays through each game, so the running time increases exponentially with every extra card. By my calculations it would take around 25 years to play through all games with a full deck!
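To make the brute-force approach concrete, here's a sketch in Python of one game simulation with cycle detection: since a game's future depends only on the hands and whose turn it is, recording every (turn, hands) state and stopping on a repeat detects an infinite game without playing forever. `play_redblack` is a made-up helper, and the win check (a player wins when their turn arrives with an empty hand) is my assumption about a rule the post leaves implicit.

```python
def play_redblack(hands, max_turns=100_000):
    """Simulate one RedBlack game under the rules above.

    `hands` is a list of strings over {'R', 'B'}, one per player, top card
    first.  Returns ('win', player) if a player's turn arrives with an
    empty hand (assumed win condition), ('loop', step) if a previously
    seen state recurs, or ('cutoff', None) after max_turns rounds.
    """
    hands = [list(h) for h in hands]
    n = len(hands)
    turn = 0                 # player about to lead the next round
    seen = set()
    for step in range(max_turns):
        state = (turn, tuple(''.join(h) for h in hands))
        if state in seen:
            return ('loop', step)       # same state as before: infinite game
        seen.add(state)
        # One round: players lay their top card until the colour changes.
        pile, player, first_colour = [], turn, None
        while True:
            if not hands[player]:
                return ('win', player)  # assumed: empty hand on your turn wins
            card = hands[player].pop(0)
            pile.append(card)
            if first_colour is None:
                first_colour = card
            elif card != first_colour:
                # Different colour: this player picks up the pile in order...
                hands[player].extend(pile)
                # ...and play restarts to their left.
                turn = (player + 1) % n
                break
            player = (player + 1) % n
    return ('cutoff', None)

print(play_redblack(['RB', 'RRB']))   # the 5-card example above → ('loop', 2)
```

Enumerating all deals and running this on each reproduces the brute-force search; the real question is whether the state-space structure allows something smarter than enumeration.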
When inserting strings into a hash table, the cost is based on computing the hash h(x), where x is the inserted string. Hashing x takes O(|x|) time, where |x| is the length of x, so each lookup in the hash table is O(1) plus hashing, and each insertion is O(|x|). However, if the table exceeds its load-factor threshold it has to be doubled in size and every element has to be rehashed into the new table. If the hashing is assumed to be constant, O(1), the amortized cost is also O(1).
But what about when the hashing is O(|x|)? Is the amortized cost also O(|x|)?
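One way to make the question concrete is to count "character touches" directly. The sketch below (a toy accounting model, not a real hash table; `insert_all` is a made-up name) charges |x| per insert for hashing plus the full rehash cost at each doubling, then compares the total work to the total input length:

```python
def insert_all(strings, load_factor=0.75):
    """Count character-hashing work for n inserts into a table that
    doubles whenever the load factor is exceeded (toy accounting only:
    no buckets are actually maintained)."""
    capacity, size, work = 8, 0, 0
    stored = []
    for s in strings:
        work += len(s)                           # hash s once: O(|s|)
        stored.append(s)
        size += 1
        if size > load_factor * capacity:        # threshold exceeded:
            capacity *= 2                        # double the table and
            work += sum(len(t) for t in stored)  # rehash every element
    return work

# Total work stays within a small constant factor of the total input
# length, suggesting the amortized cost per insert is still O(|x|):
strings = ['a' * 10] * 10_000
print(insert_all(strings) / sum(len(s) for s in strings))  # a small constant
```

The geometric doubling means each string is rehashed O(1) times on average, so the O(|x|) hashing cost survives amortization with only a constant-factor blowup.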
I'm trying to figure out example instances for both the best case O(n log n) and worst case O(w·n) for radix sort, but I'm unclear on what the log n is measuring with this algorithm. Would the best case be an instance where the number of digits per element is the log base 10 of the number of elements (10 elements of 1 digit each)? Or is it when the number of digits per element is the log base 10 of the number of possible elements given that many digits (e.g. 10 elements of 2 digits each, 00–99)? Would the worst case simply be an instance that fits neither description? Thanks in advance.
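For reference, a minimal LSD radix sort sketch in Python, where the role of w (digit count) is explicit: the algorithm makes exactly w passes, one stable bucket pass per digit, so the cost is O(w·(n + base)). The log n in the "best case" appears when the keys are just large enough to be distinct, i.e. w ≈ log_base(n), which turns O(w·n) into O(n log n):

```python
def radix_sort(nums, base=10):
    """LSD radix sort for non-negative integers: one stable bucketing
    pass per digit, w passes total, so O(w * (n + base)) overall."""
    if not nums:
        return nums
    width = len(str(max(nums)))          # w = number of decimal digits
    for d in range(width):               # one pass per digit, least first
        buckets = [[] for _ in range(base)]
        for x in nums:
            buckets[(x // base ** d) % base].append(x)
        nums = [x for b in buckets for x in b]   # stable concatenation
    return nums

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# → [2, 24, 45, 66, 75, 90, 170, 802]
```

So "10 elements of 2 digits (00–99)" is the right intuition for the n log n reading: n distinct keys need at least log_base(n) digits, and that is the smallest w can be while still distinguishing them.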
Watched a few videos on Bloom filters and it seems like it's just an index for a hash table with no data.
When you have a hash table, you hash and store the data based on the hash... the Bloom filter does the hashing and marks the hash as being used or not used.
So if this were a database with an index, it would look up in the database's index to see if a record exists or not.
So if this is just "hash the data and see if the hash is used or not", why would this be any better than just hashing and storing the data? When you do a lookup in the Bloom filter, you're going through the hashing process, so you'd do the same work, just without actually storing the data.
So would the advantage be that, if you have a huge amount of data, you simply get to find out whether a data element is there or not?
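The scheme described above can be sketched in a few lines, which also shows where the space saving comes from: the filter keeps only m bits total, regardless of how large the items are, at the price of occasional false positives (it can say "possibly present" for an item that was never added, but never "absent" for one that was). This is a minimal illustration, assuming k salted SHA-256 hashes; `BloomFilter` is a made-up class, not a library API:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter sketch: m bits, k hash positions per item,
    derived by salting SHA-256.  Items themselves are never stored."""
    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = 0                    # an m-bit integer as the bit array

    def _positions(self, item):
        for i in range(self.k):          # k salted hashes → k bit positions
            h = hashlib.sha256(f'{i}:{item}'.encode()).digest()
            yield int.from_bytes(h, 'big') % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p          # set all k bits

    def __contains__(self, item):
        # All k bits set → "possibly present"; any bit clear → "definitely not".
        return all((self.bits >> p) & 1 for p in self._positions(item))

bf = BloomFilter()
bf.add('apple')
bf.add('banana')
print('apple' in bf)    # True
print('cherry' in bf)   # False, unless a rare false positive occurs
```

So yes: the win is that membership of billions of large records can be tested against a few megabytes of bits, typically as a cheap pre-check that filters out most lookups before touching the real (disk- or network-resident) table.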