Data security is one of those pain points for an organization as it grows. A software company may have started out with only a handful of employees who share responsibilities – and admin access – to databases and applications. But as it grows, the need quickly arises for things like role-based security models, backup and restore protocols, and user credential policies.
These aren’t particularly exciting or compelling – until an event occurs that highlights their absence. By then it’s too late. This past summer there was an interesting thread on Reddit posted by an employee who accidentally deleted a production database on his first day at a new job. The tl;dr: it wasn’t his fault but the company’s poor practices; specifically, the inclusion of production database credentials in an employee onboarding document.
No question this was an error on the part of the company, and most of the comments posted to the thread addressed the misdirection of blame by its management. But it’s also easy to see how this could happen in other organizations, with databases and applications only one Word doc away from being compromised.
The problem is one that IT people are familiar with – you don’t get gold stars for preventing the disasters that never occur. That’s why a thread like this one is worth sharing internally as a starting point for changing a software company’s attitude toward security. And it is a cultural change more than anything else: some of these items can be implemented with a memo from upper management and a bit of dogged follow-up by DBAs and other admins.
For example, there should be a clear separation between production and sandbox/testing environments. No one outside the organization should have access to anything except (the appropriate) production data, and production data should never be used in testing.
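One low-tech way to enforce that separation is to make the environment an explicit, validated input to every database connection, so test code can never silently point at production. A minimal sketch in Python – the environment names and connection strings here are hypothetical:

```python
import os

# Hypothetical connection strings keyed by environment name.
DSN_BY_ENV = {
    "production": "postgresql://prod-db.internal/app",
    "staging": "postgresql://staging-db.internal/app",
    "test": "postgresql://localhost/app_test",
}

def connection_string(env=None):
    """Return the DSN for the given environment, defaulting to the
    APP_ENV variable and then to 'test' – never to production."""
    env = env or os.environ.get("APP_ENV", "test")
    if env not in DSN_BY_ENV:
        raise ValueError(f"Unknown environment: {env!r}")
    return DSN_BY_ENV[env]
```

Defaulting to the test environment means a misconfigured machine fails safe rather than quietly touching production data.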
Similarly, production data should never be left visible. That means getting into the habit of logging out of production after use, and locking away or destroying hardcopies of production data. It also means enforcing stricter policies for production credentials (which would have prevented the Redditor’s problem from happening). And of course, password complexity should be enforced as well.
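A credential policy only works if it’s checked mechanically rather than on the honor system. As a sketch, a minimal complexity check might look like this in Python – the specific rules are illustrative, and real policies vary:

```python
import re

def meets_complexity(password, min_length=12):
    """Check a password against a simple illustrative policy:
    minimum length, plus upper-case, lower-case, digit, and symbol."""
    checks = [
        len(password) >= min_length,
        re.search(r"[A-Z]", password),
        re.search(r"[a-z]", password),
        re.search(r"\d", password),
        re.search(r"[^A-Za-z0-9]", password),
    ]
    return all(checks)
```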
One best practice is to follow the principle of least privilege: grant users only the access they need to do their work, nothing more. It makes sense that clerical staff shouldn’t get access to HR data; and they probably shouldn’t be allowed to update or delete the data they are able to see.
In fact, security should be role-, not user-based. That may mean creating a slew of additional roles to give users only the access they need. Hopefully your organization’s applications allow you to implement additional roles. We enable this for customers of our analytics platform with an administrative UI that lets them implement very granular security for user roles.
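A role-based model like that reduces to two mappings – roles to permissions, and users to roles – with access decided only through the role layer. A minimal sketch; all role, user, and permission names here are hypothetical:

```python
# Roles grant permissions; users are never granted permissions directly.
ROLE_PERMISSIONS = {
    "clerk": {"orders:read"},
    "hr_admin": {"orders:read", "hr:read", "hr:write"},
    "dba_readonly": {"orders:read", "hr:read"},
}

USER_ROLES = {
    "alice": {"clerk"},
    "bob": {"hr_admin"},
}

def is_allowed(user, permission):
    """A user may perform an action only if one of their roles grants it."""
    return any(
        permission in ROLE_PERMISSIONS.get(role, set())
        for role in USER_ROLES.get(user, set())
    )
```

Adding a finer-grained role then means adding one entry to the role table, not auditing every user account.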
The principle of least privilege doesn’t simply apply to end users. Database admins (who can do the most damage) should have read-only or other limited access credentials as well, and should use them as much as possible. Nobody should be using “God” level access on a daily basis. It’s also important to remember that DBAs should not be responsible for the contents of a database. The line of business owns it and owns changes to it. Could a DBA create and execute SQL that will update/fix/add that one field you need? Certainly. Should they be changing data on the fly? No. Nobody should be playing God.
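As an illustration of working under reduced privileges, SQLite can open a database read-only through its URI syntax; a real database server would use a separate read-only account instead, but the habit – routine inspection through a connection that physically cannot write – is the same:

```python
import sqlite3

def open_readonly(path):
    """Open a SQLite database in read-only mode, so any attempt to
    modify data raises an error instead of changing production."""
    return sqlite3.connect(f"file:{path}?mode=ro", uri=True)
```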
It’s also worth considering the worst-case scenario of your DBA quitting today, or even doing something malicious to your system. Sometimes attacks are internal – Edward Snowden, for example, used his admin rights to steal NSA secrets. More likely, technical people in your organization have moved on to different roles – from DBA to CTO, for example – and still have admin access they no longer need. It may take a bit of diplomacy to remove access from people who were once responsible for those systems, which is why making it a company policy can help. Regular audits are essential to identify these situations. And of course, timely backups are critical, so if a database is damaged or corrupted, data can be recovered with a minimum of rework.
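On the backup point, many databases offer an online backup facility so the copy can run without taking the source offline. SQLite’s version makes a small, concrete example:

```python
import sqlite3

def backup_database(source_path, dest_path):
    """Copy a live SQLite database to a backup file using the online
    backup API, so the source stays usable during the copy."""
    src = sqlite3.connect(source_path)
    dst = sqlite3.connect(dest_path)
    with dst:
        src.backup(dst)
    dst.close()
    src.close()
```

A backup you haven’t restored from is only a hope; scheduling this kind of copy is the easy half, and periodically test-restoring it is the half that counts.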
None of this is fun. None of it is going to add to a company’s bottom line. But the alternative, as depicted in that Reddit thread, is even worse.