Reading post-mortems for fun and education: On January 31st 2017, we experienced a major service outage for one of our products, the online service GitLab.com. The outage was caused by an accidental removal of data from our primary database server.
The Site Reliability Engineering book is available online. A lot of it doesn’t scale down well to small operations, but there are plenty of good tips and lessons learned in there.
Enterprise Security Weekly Quick Guide To Building A Successful Incident Response Program with Paul and John. (Full show notes here)
I’ll throw an “allegedly” in here; Pastebin has a story, written by the fellow who hacked Hacking Team, about how it was accomplished. Lessons learned are, again:
- Change default passwords
- Patch your systems
- Log account and network activity – identify suspicious activity
- Secure your backups
- After sending passwords by email, delete the email and change […]
An infected laptop was used to access the systems at the Pentagon’s credit union, exposing the financial records of the members of the United States military, according to a Kaspersky Lab report. […] This isn’t the first time PenFed has been targeted. The credit union posted an alert on its Web site notifying users that […]
xkcd: Password Reuse. You can get the same functionality out of Troy Hunt’s https://haveibeenpwned.com/
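Have I Been Pwned also exposes its Pwned Passwords data via a k-anonymity range API, so you can check a password without ever sending it over the wire. A minimal sketch of the client side (the function name is mine; the only assumption from the source is the haveibeenpwned.com service itself):

```python
import hashlib

def pwned_range_query(password: str) -> tuple[str, str]:
    """Split the SHA-1 of a password into the 5-char prefix that would be
    sent to the API and the suffix that is checked locally (k-anonymity)."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = pwned_range_query("password")
# Only `prefix` is sent, e.g. GET https://api.pwnedpasswords.com/range/5BAA6;
# the response lists matching suffixes with breach counts, and a local match
# on `suffix` means the password has appeared in a known breach.
print(prefix, suffix)
```

The point of the design is that the server only ever sees the first five hex characters of the hash, which match hundreds of unrelated passwords, so the service learns nothing about which one you checked.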
Watch those faillogs! “The use of stolen credentials was the number one hacking type in both the Verizon and USSS datasets, which is pretty amazing when you think about it.” “We’ve observed companies that were hell-bent on getting patch x deployed by week’s end but hadn’t even glanced at their log files in months.” [given that […]
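Actually glancing at those log files doesn’t take much. As a sketch, here’s a few lines of Python that count failed SSH logins per source IP from syslog-style sshd output; the sample lines, hostnames, and IPs below are made up for illustration, though the “Failed password for … from …” message format is what sshd commonly emits on Linux:

```python
import re
from collections import Counter

# Hypothetical sample of /var/log/auth.log-style sshd lines.
SAMPLE_AUTH_LOG = """\
Feb  1 10:02:11 web1 sshd[4211]: Failed password for root from 203.0.113.9 port 51121 ssh2
Feb  1 10:02:14 web1 sshd[4211]: Failed password for root from 203.0.113.9 port 51124 ssh2
Feb  1 10:05:02 web1 sshd[4290]: Failed password for invalid user admin from 198.51.100.7 port 40022 ssh2
Feb  1 10:06:30 web1 sshd[4301]: Accepted publickey for deploy from 192.0.2.10 port 52010 ssh2
"""

# Match failed attempts, including those against non-existent ("invalid user") accounts.
FAILED = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")

def failed_logins(log_text: str) -> Counter:
    """Count failed-login attempts per source IP."""
    return Counter(m.group(2) for m in FAILED.finditer(log_text))

print(failed_logins(SAMPLE_AUTH_LOG).most_common())
# → [('203.0.113.9', 2), ('198.51.100.7', 1)]
```

Even a crude counter like this, run daily, would surface the credential-stuffing activity the quote is talking about long before a monthly log review would.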