Deploying on Fridays has become something of a meme. If on any particular Friday you tweet a variation of "It's Friday, don't deploy!" or "It's Friday, and I'm deploying as usual!" you're almost guaranteed to have a battle in your mentions.
The problem is that while banning Friday deploys is the reality for some organizations, it's an admission that deploys are high risk. After all, if deploying weren't risky, why would you ban it on Fridays?
If you're trying to:

- out-compete other companies
- increase the reliability of deploys
Many services include "Getting Started" tutorials that tell you to download their CLI, build your site locally, and then use the CLI to deploy it. Potential problems with this approach include:
Consider: You build and deploy your site with a globally installed Hugo binary (or Next.js, Eleventy, Snowpack, etc.).
Next week you have to deploy again, except your operating system's latest major version came out last year and you've finally decided to upgrade. As part of that you ran brew upgrade
(or similar) and upgraded a bunch of dependencies. The new version of <preferred build tool>
you installed contains a subtle bug that results in a missing HTML file (or JSON, JS, etc.).
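Nothing in a manual flow like this verifies the build output before it goes live. A minimal post-build sanity check could catch the missing file before you deploy; the sketch below assumes Hugo's default `public/` output directory, and the file names are purely illustrative:

```shell
#!/bin/sh
# Sketch of a post-build sanity check: fail loudly if any file the
# site depends on is missing from the build output. The expected-file
# list is an illustrative assumption; use paths your site really needs.
check_build() {
  dir="$1"; shift
  for f in "$@"; do
    if [ ! -f "$dir/$f" ]; then
      echo "missing expected file: $dir/$f" >&2
      return 1
    fi
  done
  echo "all expected files present in $dir"
}

# Usage after a build, e.g.:
#   hugo && check_build public index.html js/app.js
```

Running this between build and deploy turns a silent 404-in-production into an immediate, local failure.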
You build and deploy your site via the CLI as usual, no problem. A little while later a user visits your site, hits the 404 for the missing file, and can't reach the information they want (or maybe can't complete signup at all!). They contact you to say they couldn't do what they were trying to do, and you're now faced with figuring out why it isn't working anymore.
Because you deployed manually, you don't have a copy of the old site, so you can't simply revert the change; instead you have to diagnose the problem, fix it, and then rebuild and redeploy.
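One common way out of this corner is to treat every build as an immutable release and serve whatever a `current` symlink points at, so "reverting" is repointing a symlink rather than rebuilding. The directory names in this sketch are assumptions, and the build step is stubbed out so it runs as-is:

```shell
#!/bin/sh
# Sketch: keep each build as a timestamped release directory and point
# "current" at the live one. Directory names ("releases", "current",
# "public") are illustrative assumptions.
set -eu

# Stand-in for the real build step (e.g. `hugo` writing to ./public):
mkdir -p public
echo '<h1>hello</h1>' > public/index.html

# Copy the fresh build into its own release directory...
release="releases/$(date +%Y%m%d%H%M%S)"
mkdir -p "$release"
cp -R public/. "$release/"

# ...and repoint "current" at it. Rolling back later is just
# `ln -sfn releases/<older-timestamp> current` — no rebuild required.
ln -sfn "$release" current
echo "live release: $release"
```

Even for a purely manual workflow, keeping the last few releases around means a bad deploy is a one-line revert instead of a debugging session.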