Security Horror Story: Accidentally exposing PII data
October 25, 2021
Nothing beats a good horror story... especially when the topic is software development and security. I mean, what could possibly go wrong when you develop software? More importantly, what could possibly go wrong when you and your team are working on a small R&D-type application while funding for the project is still pending...
You can imagine that working under immense pressure, delivering features at lightning speed to secure funding for the next period, is a breeding ground for disaster. Especially if there is no clear vision of the project's end goal, and ideas change by the day.
Let me take you into my story, where things went wrong from a security point of view. Because this small R&D cowboy project was part of a larger institution, similar to a bank or an insurance company, there was more on the line than just some small project.
The project
The project started as a mobile app to look into real estate properties, like buildings and houses. Most of the system logic was server-side, so we built an excellent (micro) service-oriented solution in Java.
One of the services was the profile service. Every profile contained a randomly generated UUID and a list of preferences. One of the major features was that users could use the app anonymously, so we stored the UUID in the device's local storage and used it to recover the profile from the server. In a nutshell, the service looked a bit like this.
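In plain Java, a minimal sketch of that anonymous-profile flow might look like this (all class and method names here are hypothetical, not the project's actual code):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.UUID;

// Hypothetical sketch: an anonymous profile is just a random UUID plus preferences.
class Profile {
    final UUID uuid = UUID.randomUUID();       // randomly generated identifier
    final List<String> preferences = new ArrayList<>();
}

class ProfileService {
    private final Map<UUID, Profile> store = new HashMap<>();

    // Called on first app launch; the app keeps the UUID in local storage.
    Profile createAnonymousProfile() {
        Profile p = new Profile();
        store.put(p.uuid, p);
        return p;
    }

    // Called on later launches to recover the profile using only the UUID.
    Profile findByUuid(UUID uuid) {
        return store.get(uuid);
    }
}
```

The important property is that knowing the UUID is the only credential needed to recover a profile.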
At some point in time, the idea came up that a user should be able to claim a property in the system. This mainly happens if the user owns the house or building. The owner of the house could now enhance the property with pictures and a description. A house could only be claimed by one user.
This new feature meant two critical changes in the context of this story.
We created a new service called “MyHouse” service so a user can claim a house
The profile service needed to be enhanced. Now a user should be able to log in and claim a house, so we enhanced the existing profile service with the option to register.
The services looked a bit like below, where a MyHouse object was connected to a user profile through the profile's UUID, and the profile could now contain an email address.
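As a rough sketch of the enhanced model (again with hypothetical names): a claimed house holds the UUID of the claiming profile, and a registered profile now also carries an email address.

```java
import java.util.UUID;

// Hypothetical sketch of the enhanced data model after the "claim a house" feature.
class Profile {
    final UUID uuid = UUID.randomUUID();
    String email;                 // null for anonymous users, set on registration
}

class MyHouse {
    final String address;         // the physical property
    final UUID ownerProfileUuid;  // link back to the claiming profile

    MyHouse(String address, UUID ownerProfileUuid) {
        this.address = address;
        this.ownerProfileUuid = ownerProfileUuid;
    }
}
```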
It is important to note that we are instructed to still support the anonymous use like before, and this feature should be on top of what we already had.
The problem
The new MyHouse service had an endpoint to list all the claimed properties. This exposed the complete MyHouse object as JSON, including the UUID of the profile. Because we had to support the previous functionality, it was still possible to retrieve a profile by just having its UUID.
Long story short, the mobile frontend did not need the UUID at all. However, with a plain HTTP request you could take the UUID from the MyHouse object and use it in a second call to find the profile. From that point on, people were able to connect the physical address of a property to an email address. Since email addresses in many cases look like firstname.lastname@provider.com
(or similar), we now had a data leak: you could connect a person to a physical address. Oops, this was a leak of personally identifiable information, or PII data.
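To make the attack concrete, here is a hedged reconstruction in plain Java (the endpoint methods, class names, and data are all hypothetical): an attacker needs nothing more than the public house list and the pre-existing profile lookup to join a physical address to an email address.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.UUID;

// Hypothetical reconstruction of the leak: chaining the two public endpoints.
class Profile {
    final UUID uuid = UUID.randomUUID();
    String email;                          // set once the user registered
}

class MyHouse {
    final String address;
    final UUID profileUuid;                // serialized into the response by mistake
    MyHouse(String address, UUID profileUuid) {
        this.address = address;
        this.profileUuid = profileUuid;
    }
}

class Api {
    final Map<UUID, Profile> profiles = new HashMap<>();
    final List<MyHouse> houses = new ArrayList<>();

    // Stand-in for GET /myhouse: returned full MyHouse objects, UUID included.
    List<MyHouse> listClaimedHouses() { return houses; }

    // Stand-in for the pre-existing anonymous profile recovery endpoint.
    Profile findProfile(UUID uuid) { return profiles.get(uuid); }
}

class Attacker {
    // Join the two responses: physical address -> email address (PII).
    static Map<String, String> harvest(Api api) {
        Map<String, String> leak = new HashMap<>();
        for (MyHouse h : api.listClaimedHouses()) {
            Profile p = api.findProfile(h.profileUuid);
            if (p != null && p.email != null) {
                leak.put(h.address, p.email);
            }
        }
        return leak;
    }
}
```

The frontend never used the UUID, but nothing stopped anyone else from using it exactly like this.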
The fix and the aftermath
This was reported anonymously to the company's security department. Luckily, an ethical person took the responsibility to make a responsible disclosure. Once we understood the issue, the fix was a five-minute exercise, including the push to production. By adding a @JsonIgnore
annotation to the UUID field of the MyHouse POJO, we prevented the serialization of the field to JSON, and the connection between a profile and a physical address was no longer there.
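Assuming Jackson for JSON serialization (which the @JsonIgnore annotation implies), the fix could look roughly like this; field names are hypothetical:

```java
import java.util.UUID;
import com.fasterxml.jackson.annotation.JsonIgnore;

// The fix, roughly as described: keep the link in the data model,
// but stop Jackson from writing it into the JSON response.
public class MyHouse {
    public String address;

    @JsonIgnore               // field stays server-side, never reaches the JSON
    public UUID profileUuid;

    public MyHouse() {}       // default constructor for Jackson deserialization

    public MyHouse(String address, UUID profileUuid) {
        this.address = address;
        this.profileUuid = profileUuid;
    }
}
```

A one-line annotation, but it severed the only public path from a property to a profile.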
From an engineering perspective, you might end this post here. However, although the fix was easy, the aftermath of such an incident is beyond intrusive. It started with a ton of questions about the incident:
Who was exposed?
How long was it there?
What is the impact of this data leak?
What kind of data was leaked?
Who became a victim of this leak?
Why didn’t we prevent this?
Etc., etc., etc.
All of these questions meant a huge amount of paperwork that my team and I had to fill out. Some of the answers were obvious, but since this was an R&D project under a lot of pressure, we had not yet set up proper logging. Finding out who was a victim was impossible. Maybe the breach was never even exploited by malicious people. We simply didn't know.
The worst part is that higher-level management was now blaming us with remarks like, “it is a shame that our engineers are not security-aware at all” or “this team is incompetent, and you should have caught this”. On top of the huge amount of paperwork, managers started to micromanage everything without having any proper knowledge of development or security.
Lessons learned
The first thing we did was log everything. Dealing with a security problem is one thing, but the aftermath and its questions are something completely different. Next, we took a good look at our data model to see whether our REST endpoints exposed data that the front-end did not need.
The engineering team also used this incident to push back on the product managers' high demands and pressure. However, security awareness should be about embedding security in the development process, not about blaming people. In my honest opinion, the proper solution would have been to invest in culture and choose tooling that supports the development process. Not long after this, I decided to move away from this company…
Be sure to follow us on Twitter (@snyksec) to hear more security horror stories like this one! #31DaysOfSecurity