Security Incident (June 12, 2016)
On Sunday, June 12th, we were alerted to a vulnerability in our backup infrastructure. We took aggressive precautionary measures in case any of this data had been leaked, but ultimately concluded that no data had been accessed. Because of this, we are no longer requiring users to rotate API tokens or change their passwords.
On June 12th at 10:13 PDT we received a security report suggesting that one of our S3 buckets had incorrect permissions which allowed a user to access and modify files. At 10:20 we confirmed the issue and immediately revoked all permissions and credentials for the affected bucket as we began our investigation. At 10:25 we finished reviewing our other buckets and confirmed this was an isolated case.
We identified the affected bucket as one used to store partial database backups, including backups containing user accounts and API tokens. At this point we began working on an action plan to ensure that, if data had been leaked, we could minimize the damage to our customers. In parallel, we started investigating whether there was any evidence of these backups having been accessed.
At 10:47, based on the data stored in these backups, we began rotating internal credentials for our SSO providers (Google and GitHub). This process was completed at 11:14; if you were using Sentry at the time, you would have been required to re-authenticate with your provider, as we immediately invalidated all existing linked identities.
Around this same time, we identified the core issue as the configuration of our S3 ACLs. We had incorrectly specified two conflicting policies for the affected bucket, and the more restrictive (correct) policy was not the one being applied. One of these policies allowed any authenticated AWS user to access the bucket, whereas our other policies restricted which of our accounts had access to which buckets. Because of this, an attacker who knew the bucket name would have been able to access its data as long as they had any authenticated AWS account.
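To illustrate the class of misconfiguration involved (this is a hypothetical audit sketch, not our actual tooling), a script along these lines can flag buckets whose ACLs include a grant to the AWS-wide "authenticated users" group:

```python
# Hypothetical sketch: flag S3 buckets whose ACL grants access to the
# "AuthenticatedUsers" group, i.e. any AWS account, not just your own.
import boto3

AUTHENTICATED_USERS = "http://acs.amazonaws.com/groups/global/AuthenticatedUsers"

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    acl = s3.get_bucket_acl(Bucket=bucket["Name"])
    for grant in acl["Grants"]:
        grantee = grant.get("Grantee", {})
        if grantee.get("Type") == "Group" and grantee.get("URI") == AUTHENTICATED_USERS:
            print(f"{bucket['Name']}: {grant['Permission']} granted to any authenticated AWS user")
```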
Because user details were available in these backups, including bcrypt-hashed passwords, we began working on a way to expire passwords for existing users. At 11:44 we completed an initial version of this feature and began QA passes on the process. This continued throughout the afternoon, and given how expensive bcrypt is to brute force and the lack of any evidence of an intrusion, we decided to polish the process further before forcing users to update their passwords.
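To give a rough sense of why brute forcing bcrypt is so expensive, the snippet below (purely illustrative, not code from Sentry) shows how the configurable work factor drives up the cost of testing each password guess:

```python
# Illustrative only: bcrypt's work factor makes each offline guess expensive.
import time

import bcrypt  # pip install bcrypt

password = b"correct horse battery staple"

# Each +1 to the cost parameter roughly doubles the time to hash,
# and therefore the time an attacker needs to test a single guess.
for rounds in (10, 12, 14):
    start = time.monotonic()
    hashed = bcrypt.hashpw(password, bcrypt.gensalt(rounds=rounds))
    print(f"cost={rounds}: {time.monotonic() - start:.2f}s per guess")

# Verification reuses the salt and cost embedded in the stored hash.
assert bcrypt.checkpw(password, hashed)
```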
At 12:10 we began contacting Sentry integration partners who might be affected so that they could take any action deemed necessary on their end. This included Atlassian, which was most affected because our JIRA plugin stores passwords in plaintext as part of its integration configuration.
Although we had already locked down the existing S3 buckets, we still needed to address two problems: the invalid ACLs and the guessable bucket names. At 12:40 we finished moving to more secure (less guessable) bucket names and confirmed that the new ACLs matched the behavior we originally expected.
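"Less guessable" here simply means there is no meaningful pattern from which the name could be derived. A sketch of that idea (not our actual naming scheme) might look like:

```python
# Illustrative only: derive a bucket name with a random, non-guessable suffix.
import secrets

def backup_bucket_name(prefix: str = "backups") -> str:
    # 16 random bytes -> 32 hex characters, well within S3's 63-character limit.
    return f"{prefix}-{secrets.token_hex(16)}"

print(backup_bucket_name())  # e.g. backups-3f9c...
```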
For the remainder of June 12th we focused on improving the password expiration flow, analyzing the data that was exposed, and looking for evidence of any access to it. Additionally, we began drafting an action plan for users, which we intended to send out the morning of June 13th.
On the morning of June 13th we finalized the password expiration flow and the initial email to users. At 15:12 we forcefully expired all pre-existing passwords and began contacting all customers who might be affected. We finished emailing those customers at 15:31.
At 16:49 we pushed out code that encrypts our backups before they are sent to S3, and we are working on transitioning our existing backups to the same scheme. With that, we have eliminated the primary concerns exposed by this incident. As the day wrapped up, we focused on responding to our customers.
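For illustration, client-side encryption before upload can be as simple as the following sketch; the cipher and key handling shown here are assumptions for the example, not our actual implementation:

```python
# Hypothetical sketch: encrypt a backup locally so only ciphertext reaches S3.
import boto3
from cryptography.fernet import Fernet  # pip install cryptography

def upload_encrypted_backup(path: str, bucket: str, key: bytes) -> None:
    fernet = Fernet(key)  # the key would come from a secrets store, never the bucket
    with open(path, "rb") as fh:
        ciphertext = fernet.encrypt(fh.read())
    # A leaked object is useless without the encryption key.
    boto3.client("s3").put_object(Bucket=bucket, Key=f"{path}.enc", Body=ciphertext)

# Example: upload_encrypted_backup("users.dump", "backups-example", Fernet.generate_key())
```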
On June 14th at 12:12 PDT we finished our investigation and confirmed that no database backups had been accessed by a third party. Because of this, we will not be forcefully expiring existing API keys. We have also removed the forced-expiration flag from any account on which it was still pending.
Looking forward, we have a few projects that are either already in progress or that we're looking to kick off soon:
Multi-Factor Auth has been in production for the Sentry team for a few weeks, and we're nearing a public release.
Things like API tokens and integration settings would be better protected if we encrypted them in the database; an attacker would then need access to our keys in addition to the data itself. We'll be moving forward with encryption strategies here in the short term.
Additionally, services like JIRA support OAuth-based flows, which provide a stronger level of security. We're going to work on updating our integrations to use this approach wherever possible.
Our password expiration flow was extremely rushed, and we’re going to polish that up.
In addition to password expiration, we're going to work on additional security controls to ensure Sentry users follow best practices around things like password length and automated expiration.
We're working toward a policy for automated or manual review of AWS credentials, and we'll apply the same considerations to how we approach third-party infrastructure in the future.
Lastly we're going to be formalizing an official bounty program via HackerOne. We'll be launching this in the next couple of weeks.
This was the first major security incident for Sentry, and fortunately it was far less impactful than it could have been. We aggressively asked users to rotate credentials in a variety of places, and forced rotation in others. It's been a valuable experience for our product team, albeit one we wish we could have avoided.