## Algorithms | Reddit

Algorithms | Reddit by /u/sh_tomer - 2h ago

Assuming I have statements of equality of variables as an input to the algorithm, for example:

a = b

a = c

I would like to list all the possible paths / orders of equality; for this example:

a => b => c (we can reach b from a, and reach c from b)

a => c => b

b => c => a

and so on...

If I use an undirected graph here, then while traversing it I won't be able to conclude that b = c, since that equality is only implicit, so I won't be able to list all possible paths.

Is there another algorithm / data structure that can both represent these kinds of connections and traverse all possible paths, taking implicit equalities into account?
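One way to handle the implicit equalities (a sketch, not from the post — the helper name `equality_orders` is hypothetical) is to group variables into equivalence classes with union-find; within a class, every permutation is a valid chain, which automatically covers implicit links like b = c:

```python
from itertools import permutations

def equality_orders(pairs):
    # Union-find: group variables connected by equality statements.
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in pairs:
        parent[find(a)] = find(b)

    # Variables sharing a root form one equivalence class; any ordering
    # of a class is a valid chain, implicit equalities included.
    classes = {}
    for v in parent:
        classes.setdefault(find(v), []).append(v)
    return [list(p) for cls in classes.values()
            for p in permutations(sorted(cls))]
```

For the input [("a", "b"), ("a", "c")] this yields all six orderings of {a, b, c}, including b ⇒ c ⇒ a, because traversal order inside a class is unconstrained.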

Thank you for your time and efforts!

submitted by /u/sh_tomer
Algorithms | Reddit by /u/chess_dogg - 2h ago

Hi, my friend and I invented this card game called RedBlack with the following rules:

1. A deck of cards is shuffled and split evenly between all players.
2. Players hold their decks face-down and do not look at their cards.
3. The player to the left of the dealer starts the game by playing the card on the top of their pile.
4. Continuing clockwise, players must play their top card on the pile until a card of a different colour to the first card is played.
5. The player who plays the card of a different colour must then pick up the played cards and place them on the bottom of their pile (without shuffling them). Play to the left of the player that picks up.
6. You win the game if you get rid of all your cards.

It's a game that involves absolutely no skill, but it's interesting just how long a game can take to end. After some investigation, I was able to find a few "infinite games" between two players that would be never-ending. For instance, consider this game with 5 cards:

```
Start:        P1 has RB, P2 has RRB – P1 to go
First turn:   P1 plays R, P2 plays R, P1 plays B and picks up.
Now:          P1 has RRB, P2 has RB – P2 to go
Second turn:  P2 plays R, P1 plays R, P2 plays B and picks up.
Now:          P1 has RB, P2 has RRB – P1 to go
```

This is the same as it was when the cards were first dealt, so if played with no mistakes, this game would continue forever!

What I was wondering was whether there is an infinite game for a standard deck of cards (26 red, 26 black). Is there a good algorithm I could use to find this out? So far I've created a programme in x64 assembly; however, my algorithm basically just plays through each game, so the search grows exponentially with every extra card. From my calculations it would take around 25 years to play through all games with a full deck!
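One cheap check per deal (a sketch, not the poster's assembly program): simulate the game and record every full state (both piles plus whose turn it is). The state space for a fixed deal is finite, so either someone wins or a state repeats — and an exact repeat proves the game loops forever:

```python
def is_infinite(p1, p2):
    """Simulate one two-player RedBlack deal until a player wins or a
    full game state repeats. Piles are 'R'/'B' strings, top card first."""
    piles = [list(p1), list(p2)]
    leader = 0                  # player who leads the next trick
    seen = set()
    while piles[0] and piles[1]:
        state = (tuple(piles[0]), tuple(piles[1]), leader)
        if state in seen:
            return True         # exact repeat: the game never ends
        seen.add(state)
        first = piles[leader].pop(0)
        played = [first]
        cur = 1 - leader
        while piles[cur]:
            card = piles[cur].pop(0)
            played.append(card)
            if card != first:
                piles[cur].extend(played)  # picked-up cards go to the bottom
                leader = 1 - cur           # play passes left of the picker-upper
                break
            cur = 1 - cur
        else:
            break               # a pile emptied mid-trick: someone is out
    return False
```

On the 5-card example above this detects the loop as soon as the starting state recurs; enumerating deals with this test is still exponential in deck size, but each deal terminates quickly instead of being played out indefinitely.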

Thanks for any help or advice!

submitted by /u/Chess_Dogg
Algorithms | Reddit by /u/cosmologicon - 2d ago

I'm optimizing my Algorithm X implementation for solving exact cover problems, and I'm looking for realistic examples to add to my profiling so I can make the correct optimizations. So far I've got:

• Sudoku (9x9) and hexadoku (16x16)
• Dominoes tiling a 6x6 chessboard
• 10x10 N-queens problem

I know exact cover pops up in a surprising number of places, but I'm having trouble finding more good examples. Any suggestions?
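For profiling against a known-correct baseline, the compact dict-of-sets formulation of Algorithm X may be useful (a sketch; an optimized dancing-links version should produce identical covers):

```python
def solve(X, Y, solution=None):
    """Algorithm X over dicts of sets: X maps each constraint to the set
    of rows covering it; Y maps each row to the list of constraints it
    satisfies. Yields every exact cover as a list of row names."""
    if solution is None:
        solution = []
    if not X:
        yield list(solution)
        return
    c = min(X, key=lambda k: len(X[k]))   # most-constrained column first
    for r in list(X[c]):
        solution.append(r)
        cols = _select(X, Y, r)
        yield from solve(X, Y, solution)
        _deselect(X, Y, r, cols)
        solution.pop()

def _select(X, Y, r):
    # Remove every column satisfied by row r, and every row that
    # collides with r on one of those columns.
    cols = []
    for j in Y[r]:
        for i in X[j]:
            for k in Y[i]:
                if k != j:
                    X[k].remove(i)
        cols.append(X.pop(j))
    return cols

def _deselect(X, Y, r, cols):
    # Undo _select in reverse order to restore X exactly.
    for j in reversed(Y[r]):
        X[j] = cols.pop()
        for i in X[j]:
            for k in Y[i]:
                if k != j:
                    X[k].add(i)
```

Knuth's classic 7-constraint example has the unique cover {B, D, F}; each of the benchmark problems listed above (sudoku, dominoes, N-queens) reduces to a suitable choice of X and Y.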

submitted by /u/Cosmologicon
Algorithms | Reddit by /u/romanjose - 2d ago
Algorithms | Reddit by /u/dumplingaficionado - 3d ago

Why is this for loop considered to execute n+1 times?

https://imgur.com/a/ywLz8jB

Say n=5 -> 0,1,2,3,4... isn't that n times?
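The linked image isn't reproduced here, but assuming it shows the standard `for (i = 0; i < n; i++)` loop: the body does run n times; the n+1 counts evaluations of the loop test `i < n`, which must run one extra time to discover that the loop is over. A sketch (hypothetical helper) that makes the bookkeeping explicit:

```python
def count_condition_checks(n):
    """Count evaluations of the loop test in `for (i = 0; i < n; i++)`.

    The body executes n times, but the test i < n is evaluated once
    more to detect termination -- hence n + 1 checks in total.
    """
    checks = 0
    i = 0
    while True:
        checks += 1        # one evaluation of the test i < n
        if not (i < n):
            break          # the final, failing check
        i += 1             # loop body plus increment
    return checks
```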

Algorithms | Reddit by /u/_sigils - 4d ago

When inserting strings into a hash table, the analysis is based on computing a hash h(x), where x is the inserted string. Hashing x therefore takes O(|x|), where |x| is the length of x. Each lookup in the hash table is O(1), and each insertion is O(|x|). However, if the table exceeds its threshold, it has to be doubled in size and every element rehashed into the new table. If the hashing is assumed to be constant, O(1), the amortized cost is also O(1).

But what about when the hashing is O(|x|)? Is the amortized cost also O(|x|)?
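The same doubling argument should go through: resizes happen at geometrically growing sizes, so each string is rehashed only a constant number of times on average, and the amortized cost stays O(|x|). A toy accounting sketch (hypothetical work-unit model, assuming hashing x costs |x| units):

```python
def amortized_hash_cost(strings, load_factor=0.75):
    """Track total hashing work for inserts into a doubling hash table.

    Hashing a string x costs len(x) work units; on each resize every
    stored string is rehashed. Returns (total_work, sum_of_lengths),
    so the ratio bounds the amortized constant in front of O(|x|).
    """
    capacity = 8
    table = []
    work = 0
    for x in strings:
        work += len(x)              # hash x once to insert it
        table.append(x)
        if len(table) > load_factor * capacity:
            capacity *= 2
            work += sum(len(s) for s in table)  # rehash everything
    return work, sum(len(s) for s in strings)
```

Running this on, say, 100 equal-length strings shows total work staying within a small constant multiple of the raw sum of lengths, which is exactly the amortized O(|x|) claim.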

submitted by /u/_sigils
Algorithms | Reddit by /u/ashishkhuraishy - 4d ago
Algorithms | Reddit by /u/longtursk - 4d ago
Algorithms | Reddit by /u/i_am_a_regular_guy - 4d ago

Hey everyone,

I'm trying to figure out an example instance for both the best case O(nlogn) and worst case O(w*n) for radix sort. But I'm unclear on what the logn is measuring with this algorithm. Would best case be an instance where the number of digits for each element is the log base 10 of the number of elements (10 elements of 1 digit each)? Or is it when the number of digits for each element is the log base 10 of the number of possible elements based on combinations given the number of digits (ex. 10 elements of 2 digits (00-99))? Would worst case simply be an instance where none of the elements fit this description? Thanks in advance.

EDIT: Accidentally had the cases reversed.
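It may help to see where w enters concretely. In LSD radix sort (sketched below) the outer loop runs once per digit, so total work is O(w·(n + base)); one common reading of the question is that the O(n log n) figure is the case w ≈ log₁₀ n — e.g. 10 one-digit elements, or n elements drawn from 0..n−1 — while O(w·n) is the general bound when the digit count w is unrelated to n:

```python
def radix_sort(nums, base=10):
    """LSD radix sort for non-negative integers.

    Makes w passes, where w is the digit count of the maximum value,
    each pass doing O(n + base) work: O(w * (n + base)) total.
    """
    if not nums:
        return []
    out = list(nums)
    exp = 1
    while max(nums) // exp > 0:       # one pass per digit position
        buckets = [[] for _ in range(base)]
        for v in out:
            buckets[(v // exp) % base].append(v)  # stable by current digit
        out = [v for b in buckets for v in b]
        exp *= base
    return out
```

So a best-case instance is one where all keys fit in about log₁₀ n digits, and a worst-case instance is one where the keys are much wider than that (large w with small n).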

submitted by /u/I_am_a_regular_guy
Algorithms | Reddit by /u/karljay001 - 1w ago

Watched a few videos on Bloom filters and it seems like it's just an index for a hash table with no data.

When you have a hash table, you hash and store the data based on the hash... the Bloom filter does the hashing and marks the hash as being used or not used.

So if this were a database with an index, it would look up in the database's index to see if a record exists or not.

So if this is just "hash the data and see if the hash is used or not", why would this be any better than just hashing and storing the data? When you do a lookup in the Bloom filter, you're going through the hashing process, so you'd do the same work, just without actually storing the data.

So the advantage would be what, if you have a huge amount of data, you simply get to find out if a data element is there or not?

What's a practical use of this?

https://llimllib.github.io/bloomfilter-tutorial/
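The practical win is space: the filter stores only bits, never the keys, so a set of millions of long strings fits in a few megabytes, and a "definitely not present" answer lets you skip an expensive lookup (disk, network, database). A minimal sketch — k salted hashes over an m-bit array; the class name and parameters are illustrative, not from the linked tutorial:

```python
import hashlib

class BloomFilter:
    """Bit-array membership sketch: no keys are stored, so lookups can
    give false positives but never false negatives."""

    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = 0                   # m-bit array packed into an int

    def _positions(self, item):
        # k pseudo-independent positions via salted SHA-256
        for salt in range(self.k):
            digest = hashlib.sha256(f"{salt}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def might_contain(self, item):
        # True means "maybe present"; False means "definitely absent".
        return all(self.bits >> p & 1 for p in self._positions(item))
```

Whether this beats "just hash and store the data" depends entirely on key size and where the real data lives: the filter answers maybe / definitely-not in O(k) hashes and roughly m bits, whereas a hash table must keep every key resident to answer at all.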

submitted by /u/KarlJay001
