FIX: Ensure id sequences are not reset during db:migrate (PR #14184)

The seed-fu gem resets the sequence on all the tables it touches. In some situations, this can cause primary keys to be re-used. This commit introduces a freedom patch which ensures seed-fu only touches the sequence when it is less than the id of one of the seeded records.
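In pseudocode, the guard the patch introduces looks roughly like this (simplified sketch; the method and variable names are illustrative, not seed-fu's actual API):

```ruby
# Illustrative sketch of the patched behaviour: the sequence is only
# moved when it has fallen behind the highest seeded id; it is never
# moved backwards.
def updated_sequence_value(current_seq, max_seeded_id)
  current_seq < max_seeded_id ? max_seeded_id : current_seq
end

updated_sequence_value(62, 10)  # => 62 (sequence already ahead, left alone)
updated_sequence_value(5, 10)   # => 10 (sequence behind seeds, bumped forward)
```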

Looks good to me :+1:

Is this going to run all the seeds and make the test slow?

Should we also introduce a migration to fix the affected tables?

Yes, it will run all the seeds, but we already do this in a number of different specs. It’s very fast:

    puts Benchmark.measure { SeedFu.seed }
    0.041236   0.002658   0.043894 (  0.052766)

> Should we also introduce a migration to fix the affected tables?

What would the migration do? We don’t have any way of knowing what the sequence value should be. This fix will just stop the issue happening again in future.

Wouldn’t the latest sequence + 1 work?

Hmm, I’m surprised it is that fast. I guess the seeds are already run during the initial migration of the test database, so this call is essentially a no-op?

What if two users (or more) have been deleted since the last SeedFu run? Then we would need to bump by 2 (or more). The vast majority of sites are probably unaffected by this, so most wouldn’t need to be bumped at all.

Sorry under what scenario would a site be affected? I personally think that resetting the sequence is a better state than leaving the current broken sites broken.

Sorry I meant fixing the sequence not resetting the sequence.

Here is a local repro:

❯ rails runner 'u = User.create!(username: SecureRandom.hex(10), email: "#{SecureRandom.hex}@example.com"); puts u.id; UserDestroyer.new(Discourse.system_user).destroy(u);'
63
❯ rails runner 'u = User.create!(username: SecureRandom.hex(10), email: "#{SecureRandom.hex}@example.com"); puts u.id; UserDestroyer.new(Discourse.system_user).destroy(u);'
64
❯ rails runner 'u = User.create!(username: SecureRandom.hex(10), email: "#{SecureRandom.hex}@example.com"); puts u.id; UserDestroyer.new(Discourse.system_user).destroy(u);'
65
❯ rake db:migrate
❯ rails runner 'u = User.create!(username: SecureRandom.hex(10), email: "#{SecureRandom.hex}@example.com"); puts u.id; UserDestroyer.new(Discourse.system_user).destroy(u);'
63

So sites are affected when these things happen in sequence:

  1. They delete one or more users which have the “highest” id in the table
  2. Before creating any other users, db:migrate is run

The result is that the sequence is ‘reset’ to the current highest ID. Effectively, the sequence is decremented by the count of ‘highest id’ users that were deleted.
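The mechanics can be simulated in plain Ruby, with a counter standing in for the Postgres sequence and an array standing in for the users table (no database involved):

```ruby
ids = (1..62).to_a   # existing users
seq = 62             # current sequence value

create    = -> { seq += 1; ids << seq; seq }
reset_seq = -> { seq = ids.max }  # what an unconditional setval(MAX(id)) does

first = create.call   # new user gets id 63
ids.delete(first)     # that user is deleted
reset_seq.call        # db:migrate runs seed-fu; sequence drops back to 62
again = create.call   # next user re-uses id 63
```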

For most ‘seeded’ tables (e.g. Badges), I don’t think we have any way to work out the ids of deleted items. So there is literally no way for us to fix this.

For Users, we might be able to hack something together by checking the staff action log entries (and looking for user_histories.target_user_id > max(users.id)). But I’m not sure it’s worth the risk/effort - as far as I can tell, this bug has existed since the beginning of Discourse. It’s very unlikely to be hit, and even if it is, re-use of ids is generally not a massive issue.
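As a rough illustration of that idea, over stand-in data rather than the real tables (the setval step is shown only as a comment):

```ruby
user_ids = [1, 2, 62]    # ids still present in the users table
logged   = [40, 63, 65]  # target_user_id values from the staff action log
max_id   = user_ids.max

# Deleted users whose logged ids exceed the current maximum reveal how
# far the sequence had advanced before it was reset.
orphaned = logged.select { |id| id > max_id }.max
# If orphaned is non-nil, the sequence would be bumped past it, e.g.:
#   SELECT setval('users_id_seq', 65);
```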

I still don’t fully understand the repro, but if the sequence is reset each time db:migrate is called, won’t the sequence keep resetting?

I think we should try and avoid having some instance

Sorry let’s merge this fix in first, I’ll expand my thoughts tomorrow in detail on dev once I confirm a few things locally.