Data is everywhere — and increasingly difficult to track and manage. Worse still — as Lepide’s Philip Robinson notes — much of it is redundant, obsolete and trivial (ROT) and can be a prime target for cyberattack.
Managing the resulting data sprawl, Robinson says, begins with deciding where to store the data, then establishing comprehensive data access governance that incorporates access control, disposal, risk management and compliance considerations.
“Storing all your data in the cloud will make it more accessible to your employees – thus improving productivity (for example),” he writes.
Automated data discovery and classification solutions can be employed to ensure the company knows exactly what data it has, where it is, and who has access to it.
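As a rough illustration of that discovery step, the sketch below walks a file tree and records where each file lives, how large it is, who owns it and when it last changed. The root path, the CSV output name and the POSIX-only owner lookup are assumptions made for the example, not features of any particular product.

```python
# Minimal data-inventory sketch: walk a directory tree and record location,
# size, owner and last-modified time for each file.
import csv
import os
from datetime import datetime, timezone
from pathlib import Path

def build_inventory(root: str, out_csv: str = "data_inventory.csv") -> None:
    with open(out_csv, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["path", "size_bytes", "owner", "last_modified"])
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                p = Path(dirpath) / name
                try:
                    st = p.stat()
                    owner = p.owner()  # POSIX only; Windows needs a different lookup
                except OSError:
                    continue  # skip files we cannot read
                modified = datetime.fromtimestamp(st.st_mtime, tz=timezone.utc).isoformat()
                writer.writerow([str(p), st.st_size, owner, modified])

if __name__ == "__main__":
    build_inventory("/srv/shared")  # hypothetical file share
```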
Data deduplication tools, primarily used for backup and restore, can also remove duplicate data from enterprise repositories, while older data can be identified and removed by searching content by last-access date.
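A minimal sketch of both clean-up checks, assuming a plain file share rather than any specific deduplication product: duplicates are found by content hash, and "older" data is flagged by last-access time. The two-year cut-off is an arbitrary illustration.

```python
# Flag duplicate files (same SHA-256 digest) and stale files (not accessed
# within the cut-off window) under a given root directory.
import hashlib
import os
import time
from collections import defaultdict
from pathlib import Path

STALE_AFTER_DAYS = 730  # illustrative threshold

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def find_duplicates_and_stale(root: str):
    by_hash = defaultdict(list)
    stale = []
    cutoff = time.time() - STALE_AFTER_DAYS * 86400
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            p = Path(dirpath) / name
            try:
                st = p.stat()
                by_hash[sha256_of(p)].append(p)
            except OSError:
                continue
            if st.st_atime < cutoff:  # note: atime is not reliable on every mount
                stale.append(p)
    duplicates = {h: paths for h, paths in by_hash.items() if len(paths) > 1}
    return duplicates, stale
```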
Data security platforms can aggregate and correlate event data from multiple sources, identifying usage patterns with machine learning and flagging inactive 'ghost' user accounts for removal.
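The ghost-account check can be pictured as something like the following, assuming an exported CSV of usernames and last-login timestamps; the column names and the 90-day threshold are illustrative assumptions, not any vendor's format.

```python
# Read an account export and flag users who have not signed in within the
# inactivity window.
import csv
from datetime import datetime, timedelta, timezone

def find_ghost_accounts(export_csv: str, inactive_days: int = 90) -> list[str]:
    cutoff = datetime.now(timezone.utc) - timedelta(days=inactive_days)
    ghosts = []
    with open(export_csv, newline="") as fh:
        for row in csv.DictReader(fh):
            last_login = datetime.fromisoformat(row["last_login"])
            if last_login.tzinfo is None:
                last_login = last_login.replace(tzinfo=timezone.utc)
            if last_login < cutoff:
                ghosts.append(row["username"])
    return ghosts

# Example: accounts that have not signed in for six months
# print(find_ghost_accounts("ad_last_logins.csv", inactive_days=180))
```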
Users can scan on-premises and cloud repositories for sensitive data and classify it according to a chosen schema, while old information can be moved to a 'redundant' file rather than deleted.
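A toy version of that scan-and-classify step might look like the sketch below; the regular expressions, labels and 'redundant' holding folder are simplified assumptions rather than a production rule set.

```python
# Classify files by matching their contents against a small rule set, and move
# old files into a 'redundant' holding area instead of deleting them.
import re
from pathlib import Path

CLASSIFICATION_RULES = {
    "PII: email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PII: UK NI number": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b"),  # simplified pattern
}

def classify_file(path: Path) -> list[str]:
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return []
    return [label for label, pattern in CLASSIFICATION_RULES.items() if pattern.search(text)]

def quarantine_old_file(path: Path, redundant_dir: Path) -> None:
    """Move a file into a 'redundant' holding area rather than deleting it."""
    redundant_dir.mkdir(parents=True, exist_ok=True)
    # rename works within one filesystem; shutil.move would handle cross-device moves
    path.rename(redundant_dir / path.name)
```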
“Most sophisticated solutions will allow you to choose a classification taxonomy that aligns with the data privacy laws that are relevant to your industry,” Robinson says.
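One way to picture such a taxonomy is a plain mapping from classification labels to the privacy regimes they commonly fall under; the labels and mappings below are illustrative assumptions, not legal guidance or a vendor configuration.

```python
# Illustrative regulation-aligned taxonomy: which privacy regimes a given
# classification label typically implicates.
TAXONOMY = {
    "PII: email address":  ["GDPR", "UK DPA 2018"],
    "PHI: patient record": ["HIPAA"],
    "PCI: card number":    ["PCI DSS"],
}

def regulations_for(labels: list[str]) -> set[str]:
    """Collect every regulation implicated by the labels found in a file."""
    return {reg for label in labels for reg in TAXONOMY.get(label, [])}
```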
Lepide’s data protection platform with behavioural analysis has been used by the UK’s Home Office and NHS as well as blue-chip corporations like Deloitte and Fujitsu.
(Photo by Martin Bargl on Unsplash)