FIX: S3 uploads were missing a cache-control header (PR #7902)

You’ve signed the CLA, xfalcox. Thank you! This pull request is ready for review.

Errr do we want this on EVERY SINGLE upload? That seems unwise?

Are we going to need a find_each for this? I don’t want us to run out of memory when there are many, many uploads / images.
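For context, find_each avoids the memory concern by loading records in fixed-size batches instead of materializing every row at once. A minimal sketch of that idea in plain Ruby, using each_slice as a stand-in for ActiveRecord's find_each (the 1000-row default mirrors the real API, but the helper itself is an illustration, not Discourse code):

```ruby
# Plain-Ruby stand-in for ActiveRecord's find_each: iterate a large
# collection in fixed-size batches so only one batch is held at a time.
def find_each(records, batch_size: 1000)
  records.each_slice(batch_size) do |batch|
    batch.each { |record| yield record }
  end
end

uploads = (1..2500).to_a # stand-in for Upload.all
count = 0
find_each(uploads, batch_size: 1000) { |_upload| count += 1 }
count # all 2500 rows visited, but at most 1000 held per batch
```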

The pluck usage makes it an array of [int, string, int] tuples instead of the full-blown objects, which is the same approach used by the existing rake task at

(this task is basically a copy of that one using different S3 API calls)
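To illustrate why pluck helps with memory: it keeps only the selected columns as plain tuples rather than instantiating full model objects. A sketch in plain Ruby, where the column names (id, url, filesize) are assumptions for the example, not necessarily the columns the task actually plucks:

```ruby
# Hypothetical stand-in for the Upload model; field names are assumed.
Upload = Struct.new(:id, :url, :filesize, :user_id, :sha1, :extension)

uploads = [
  Upload.new(1, "//bucket.s3.amazonaws.com/a.png", 1024, 7, "abc", "png"),
  Upload.new(2, "//bucket.s3.amazonaws.com/b.jpg", 2048, 9, "def", "jpg")
]

# Equivalent of Upload.pluck(:id, :url, :filesize): an array of
# [int, string, int] tuples, with the other columns discarded.
tuples = uploads.map { |u| [u.id, u.url, u.filesize] }
```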

If you feel strongly about this I can try to extract the common bits into a method and process it in batches.

Let’s try living with it for now, and revisit as batches later.
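If this is revisited as batches later, the two ideas combine naturally: pluck the tuples in fixed-size slices so neither the full objects nor the entire tuple array is ever in memory at once. A sketch in plain Ruby, where pluck_in_batches is a hypothetical helper (not an ActiveRecord or Discourse method) and the column names are assumed:

```ruby
# Hypothetical helper: yield pluck-style tuples one batch at a time.
def pluck_in_batches(rows, batch_size: 1000)
  rows.each_slice(batch_size) do |batch|
    yield batch.map { |r| [r[:id], r[:url], r[:filesize]] }
  end
end

rows = (1..2500).map { |i| { id: i, url: "//bucket/u#{i}", filesize: i * 10 } }
batch_sizes = []
pluck_in_batches(rows) { |tuples| batch_sizes << tuples.size }
batch_sizes # the 2500 rows arrive as batches of 1000, 1000, and 500
```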