I have a lot of things I could say about this (for context, I just retired a month ago as chair of the AO3 support committee so I have insight, but I am speaking only on my own behalf and not as an official representative).
First: this particular type of data migration (moving an ID column from INT to BIGINT) has happened twice in AO3's past already (for history, and for kudos), and it was the same each time: altering a table with hundreds of millions of rows is time-consuming, you need to be cautious doing it, and it WILL require a few hours of downtime to be safe. It is not "fixing a bug". It is moving the contents of every bookmark on the site. It will have to happen again for other types of data in the future, because AO3 just keeps growing.
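To make the scale point a bit more concrete, here's a toy sketch (Python with SQLite, nothing to do with AO3's actual tooling) of the general "build a new table with a wider ID column, copy every row, swap" pattern that this kind of change amounts to. The table and column names are made up; the point is that the copy step's duration grows with the number of rows:

```python
# Toy illustration only -- not AO3's migration, just the general shape of
# widening an ID column by rebuilding the table and copying everything over.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Stand-in for the original table, whose ID column is about to run out of room.
cur.execute("CREATE TABLE bookmarks (id INTEGER PRIMARY KEY, note TEXT)")
cur.executemany("INSERT INTO bookmarks (note) VALUES (?)",
                [(f"bookmark {i}",) for i in range(1000)])  # imagine ~700 million rows

# 1. Build a replacement table with the wider ID type.
cur.execute("CREATE TABLE bookmarks_new (id BIGINT PRIMARY KEY, note TEXT)")

# 2. Copy every existing row across -- the slow part, proportional to table size.
cur.execute("INSERT INTO bookmarks_new SELECT id, note FROM bookmarks")

# 3. Swap the new table in under the old name.
cur.execute("DROP TABLE bookmarks")
cur.execute("ALTER TABLE bookmarks_new RENAME TO bookmarks")
conn.commit()

print(cur.execute("SELECT COUNT(*) FROM bookmarks").fetchone()[0])  # 1000
```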
Second: They can't know exactly how long a migration like that will take, so they make their best estimate. They can't test it beforehand on the test archive and have it reproduce exactly what will happen on the live archive, because the live archive has so much more data. Estimating "how long will it take to move 700 million items?" is a matter of experience and luck. (Yes, 700 million is closer to the actual number of bookmarks involved. I know people keep citing the 2 billion number, but each bookmark is assigned an ID number, and bookmark IDs increment by 3. The highest ID available under the old system was that slightly-over-2.1-billion number, i.e. the signed INT maximum, so the actual total is about a third of that. It's still a lot.)
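For anyone who wants to check that arithmetic, it's just the old ID ceiling divided by the step size (the standard signed 32-bit maximum, and the step of 3 mentioned above):

```python
# Back-of-the-envelope for "about 1/3 of 2.1 billion".
INT_MAX = 2**31 - 1   # 2,147,483,647 -- the "slightly over 2.1 billion" ceiling
ID_STEP = 3           # bookmark IDs increment by 3

approx_bookmarks = INT_MAX // ID_STEP
print(f"{approx_bookmarks:,}")   # ~715,827,882 -> "about 700 million"
```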
Third: In general, AO3 tries not to have downtime, and it does very well. My observation is that the process in a case like this usually goes: "Ok, let's start doing this and see how it goes. *waits to see if things start to get painfully slow and a lot of errors start happening* Ok, there are lots of errors and the site is getting slow to use, let's block all bot traffic and see if that helps. *that helps for a while and then things start to pile up again* Ok, let's try flipping the site into maintenance mode (i.e. take it down) just for a minute or two and see if that lets it catch up. *that helps for a bit and then things stop catching up with just a few minutes of downtime* Ok, we do need to actually take it down and let the process run until it finishes." My point is that they first try various options that, if they work, won't require taking the site down all the way, before resorting to downtime. If they left the site up, the process would take a lot longer, and people would experience a lot more problems while using it. Taking it down is safer and more efficient in that kind of case.
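If it helps to picture it, the escalation above boils down to something like the sketch below. The helper functions and the threshold are entirely made up (and it compresses the "brief maintenance flip" step); this is not AO3's migration code, just the shape of "work in batches, watch the site, escalate only as far as you have to":

```python
# Illustrative only: batch the work, monitor impact, escalate mitigations.
import time

def migrate_batch(n):            # hypothetical: convert one chunk of bookmark rows
    time.sleep(0.01)             # stand-in for real database work

def site_error_rate():           # hypothetical: errors per minute from monitoring
    return 0.0                   # pretend the site stays healthy

def block_bot_traffic():         # hypothetical first mitigation
    print("blocking bot traffic")

def enter_maintenance_mode():    # hypothetical last resort: planned downtime
    print("maintenance mode on; letting the migration run to the end")

ERROR_THRESHOLD = 5              # made-up number, purely illustrative
bots_blocked = False
down_for_maintenance = False

for batch in range(100):         # the real migration has vastly more batches
    migrate_batch(batch)
    if site_error_rate() > ERROR_THRESHOLD:
        if not bots_blocked:                # step 1: shed non-human load first
            block_bot_traffic()
            bots_blocked = True
        elif not down_for_maintenance:      # step 2: accept real downtime
            enter_maintenance_mode()
            down_for_maintenance = True
```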
Fourth: The same people doing the work also need to communicate it to the people who do the tweets/Tumblr posts/status updates. So not just doing complicated database stuff, but also telling communications "hey, we are about to do X and Y and it will take approximately Z hours". (Fortunately we have mostly moved on from the times when it was literally the SAME people trying to do both things, and now have dedicated communications folks to help.) Sometimes when they are trying to do a lot of things at once, the technical volunteers might not communicate as quickly or clearly as would be ideal, or a comms person might not be immediately available to make a post, so a status update goes out later than we'd prefer, but we try to always communicate to users before doing something big like taking the site down. For normal planned downtime, the ideal is a post a couple of days before, another an hour before, one when the downtime starts, and one when it's done. Sometimes there isn't time for that much advance warning, when something is unexpected or doesn't go as planned.
Fifth: Also, the people doing this are doing it in their spare time, around real jobs, often in the evenings, on weekends, or in the middle of the night while they're trying to cook dinner, etc., and they're doing it better than many sites with paid employees and larger staff. So, cordially, anyone saying that they're lying or not doing a good enough job can go to hell.