PERF: destroy users in batches of 50 users

This speeds up the job, as we no longer need to commit a separate transaction per deleted user.

diff --git a/app/jobs/scheduled/clean_up_inactive_users.rb b/app/jobs/scheduled/clean_up_inactive_users.rb
index 25d3f3b..bc2a306 100644
--- a/app/jobs/scheduled/clean_up_inactive_users.rb
+++ b/app/jobs/scheduled/clean_up_inactive_users.rb
@@ -6,24 +6,37 @@ module Jobs
     def execute(args)
       return if SiteSetting.clean_up_inactive_users_after_days <= 0
 
-      destroyer = UserDestroyer.new(Discourse.system_user)
-
       User.joins("LEFT JOIN posts ON posts.user_id = users.id")
         .where(last_posted_at: nil, trust_level: TrustLevel.levels[:newuser], admin: false)
         .where(
           "posts.user_id IS NULL AND users.last_seen_at < ?",
           SiteSetting.clean_up_inactive_users_after_days.days.ago
-        )
+      )
         .limit(1000)
-        .pluck(:id).each do |id|
-        begin
-          user = User.find(id)
-          destroyer.destroy(user, context: I18n.t("user.destroy_reasons.inactive_user"))
-        rescue => e
-          Discourse.handle_job_exception(e,
-            message: "Cleaning up inactive users",
-            extra: { user_id: id }
-          )
+        .pluck(:id).each_slice(50) do |slice|
+        destroy(slice)
+      end
+
+    end
+
+    private
+
+    def destroy(ids)
+
+      destroyer = UserDestroyer.new(Discourse.system_user)
+
+      User.transaction do
+        ids.each do |id|
+          begin
+            user = User.find(id)
+            destroyer.destroy(user, transaction: false, context: I18n.t("user.destroy_reasons.inactive_user"))
+          rescue => e
+            Discourse.handle_job_exception(e,
+             message: "Cleaning up inactive users",
+             extra: { user_id: id }
+            )
+            raise e
+          end
         end
       end
     end

GitHub sha: 8fc2b012
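For readers skimming the diff, the speed-up is in the commit pattern. A rough before/after sketch (with `ids`, `destroyer`, and `reason` standing in for the job's locals, and assuming UserDestroyer opens its own transaction unless told otherwise, which is what the new `transaction: false` option suggests):

# Before: each destroy was committed in its own transaction,
# so deleting N users meant N COMMITs.
ids.each do |id|
  destroyer.destroy(User.find(id), context: reason)
end

# After: one explicit COMMIT per slice of 50; transaction: false
# keeps UserDestroyer from opening a nested transaction per user.
ids.each_slice(50) do |slice|
  User.transaction do
    slice.each do |id|
      destroyer.destroy(User.find(id), transaction: false, context: reason)
    end
  end
end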


There is a minor behavior change here: if a single user fails to delete, we will quit the job. I think it is a fair compromise, because the larger transaction saves time and partial rewinds on failure seem a bit overkill here.
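For reference, a per-user "partial rewind" would mean a savepoint inside the batch transaction, roughly like this (a sketch, not the committed code; `ids` and `reason` are stand-ins for the job's locals):

User.transaction do
  ids.each do |id|
    begin
      # requires_new: true opens a SAVEPOINT, so a failure rolls
      # back only this user's deletion, not the whole slice
      User.transaction(requires_new: true) do
        destroyer.destroy(User.find(id), transaction: false, context: reason)
      end
    rescue => e
      Discourse.handle_job_exception(e,
        message: "Cleaning up inactive users",
        extra: { user_id: id }
      )
    end
  end
end

Each savepoint costs an extra SAVEPOINT/RELEASE round trip per user, which is the overhead being weighed against the rare failure case.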
