I have been asked a few times why I host my blog on Blogger and not on my own site.
The answer is simple: right now I do not have much time to manage a blog platform, and I do not feel the need. My personal blog is a secondary activity that I do for fun and in the hope that people find it useful, so there is no incentive to invest a great deal of time in running it.
In the past, I not only hosted my own blog, but I also wrote the code behind it. That was a great learning experience, but it requires a time investment that I do not currently have.
But you are an expert, you should be showing off your skills! Well, yes, I do know what I am doing, and that is one of the reasons I am outsourcing the running of the blog to Google. By doing this, I know I will not make money from the blog (not that I intend to, anyway) and I give up control over many things that are only possible when you self-host, but I also do not have to keep the system up to date, manage security issues in my code or in any libraries I am using, or keep up with spam comments (not that people comment anyway).
Finally, my blog is a perfect candidate for being just static content hosted somewhere, but that would require me to use a third party for comments, at which point I am not far from where I am now, so I might as well take full advantage of someone else hosting the blog for me.
Tuesday, 28 March 2017
GDPR - The five things you wanted to know (but were too afraid to ask)
This entry was originally posted on the Workshare blog, at https://www.workshare.com/blog/gdpr-the-five-things-you-need-to-know-but-were-too-afraid-to-ask.
The General Data Protection Regulation (GDPR) is a new law that will come into effect in the European Union (EU) on the 25th of May, 2018. Its primary goal is to strengthen and unify data protection for individuals in the EU. The GDPR replaces the Data Protection Directive from 1995 and marks a major departure in many respects.
Without further ado, let’s look at the five things you need to know about the GDPR and how it changes the rules.
1. Changes the definition of personal data
Article 4 defines personal data as ‘any information relating to an identified or identifiable natural person’. Until now, the meaning of ‘identifiable’ was open to interpretation; Recital 26 clarifies it as identification by ‘all means reasonably likely to be used’. This means that even if data is not identifiable by itself to the business that holds it, it may still be considered personal if it can be used to identify a person via aggregation with other data sources.
The GDPR also clarifies that personal identification does not need to be a name; it includes identifiers such as IDs, online handles, IP addresses and cookies.
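To make the aggregation point concrete, here is a toy sketch with entirely hypothetical data: neither dataset alone names a person, but joining them on the IP address does, which is exactly what makes that IP address personal data.

```python
# Hypothetical illustration of re-identification by aggregation.
usage_log = {"203.0.113.7": {"pages_visited": 42}}          # "anonymous" analytics
isp_records = {"203.0.113.7": {"subscriber": "Jane Doe"}}   # a second data source

for ip, usage in usage_log.items():
    subscriber = isp_records.get(ip)
    if subscriber is not None:
        print(f"{subscriber['subscriber']} visited {usage['pages_visited']} pages")
# Output: Jane Doe visited 42 pages -> the usage log was personal data after all.
```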
2. Requires consent
Valid consent will be required before storing or processing personal data, and that consent must cover both the data being collected and the purposes for which it will be used.
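As a minimal sketch of what recording consent might look like in practice (the field names here are hypothetical, not prescribed by the regulation):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    """Hypothetical record of a data subject's consent: the GDPR requires
    consent to name the data collected and the purposes it is used for."""
    subject_id: str                 # who gave consent
    data_categories: list[str]      # e.g. ["email", "ip_address"]
    purposes: list[str]             # e.g. ["newsletter", "analytics"]
    given_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    withdrawn_at: datetime | None = None   # consent can be withdrawn later

    def covers(self, category: str, purpose: str) -> bool:
        """Check that a proposed use falls within the recorded consent."""
        return (self.withdrawn_at is None
                and category in self.data_categories
                and purpose in self.purposes)

consent = ConsentRecord("user-42", ["email"], ["newsletter"])
print(consent.covers("email", "newsletter"))  # True
print(consent.covers("email", "analytics"))   # False: not consented to
```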
3. Depends on the data subject location, not just the company location
In the past, EU data protection regulation only applied to businesses within the EU. The GDPR specifies that any company that handles the personal data of individuals within the EU is responsible for that data and must follow the regulation, no matter where the company is located. This means you cannot escape this regulation just by being outside the EU.
Of course, by being in the EU, you’re still subject to the regulation, no matter where your data subjects are.
4. Includes responsibilities for processors, not just controllers
In GDPR parlance, the controller is the business that receives the data and consent directly from the data subject, while the processor is any company that processes or stores the data for the controller.
Under the GDPR, processors are required to demonstrate the same level of compliance and security as controllers. A processor is also required to notify the controller of any breach ‘without undue delay’. Since the controller is required by law to promptly notify the authorities of any breach, this is a major point of contention, and the relationship between controller and processor must be governed by a binding contract.
Processors are also not allowed to transfer data to any sub-processor without a written agreement with the controller and, even where such an agreement exists, prior notice must be given so the controller has a chance to raise objections.
5. Increases and clarifies the rights of the data subject
The GDPR includes provisions covering the data subject’s rights to have their data rectified or erased, especially where the data has been processed unlawfully.
Just two more things, promise...
Sorry, it’s more than five things, but there is a lot of information to digest. At this point you can see that the new legislation brings major changes to the management of personal data, but we’re not done yet!
We’ve left the best for last.
6. Breach notification
Any data breach involving personal data must be reported to the relevant authority within 72 hours. The regulation does not define a minimum severity for a data breach, so potentially any breach at all will require notification.
Individuals concerned must also be notified if it is determined that they will suffer adverse effects.
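To make the 72-hour window concrete, a trivial sketch (the clock starts when you become aware of the breach):

```python
from datetime import datetime, timedelta, timezone

NOTIFICATION_WINDOW = timedelta(hours=72)

def notification_deadline(detected_at: datetime) -> datetime:
    """Latest time by which the relevant authority must be notified."""
    return detected_at + NOTIFICATION_WINDOW

detected = datetime(2018, 6, 1, 9, 30, tzinfo=timezone.utc)
print(notification_deadline(detected))  # 2018-06-04 09:30:00+00:00
```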
7. Severe penalties
Failure to comply with the GDPR, including failure to notify of a breach, can result in a fine of up to EUR 20 million or 4% of global revenue for the previous year, whichever is greater, as well as regular audits.
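To put the ‘whichever is greater’ rule in concrete terms:

```python
def max_gdpr_fine(global_annual_revenue_eur: float) -> float:
    """Upper bound of a GDPR fine: EUR 20 million or 4% of the
    previous year's global revenue, whichever is greater."""
    return max(20_000_000.0, 0.04 * global_annual_revenue_eur)

# A company with EUR 2bn in revenue could face up to EUR 80m,
# while a small company is still exposed to the full EUR 20m cap.
print(max_gdpr_fine(2_000_000_000))  # 80000000.0
print(max_gdpr_fine(50_000_000))     # 20000000.0
```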
As you can see, the GDPR is an extensive change to data protection regulation in the EU, extending its protection beyond the existing level and scope and massively increasing requirements and fines.
One key thing about this legislation is that it comes into effect in May 2018, which is not long from now, considering that complying with it may require a complete overhaul of the way data is managed.
How can I prepare for the GDPR?
The first thing to do is get an understanding of the data you currently handle. You need to know all the data you process and which parts of it are considered personal data.
Once you’ve determined what data you handle, you must design and implement processes for handling it correctly, including protective measures to prevent breaches. The standard approach is to establish a baseline of what is considered normal behaviour and then set protective measures that alert on abnormal behaviour or breaches.
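As an illustration of the baseline-then-alert idea, here is a deliberately simple sketch (real monitoring tools are far more sophisticated):

```python
import statistics

def build_baseline(samples: list[float]) -> tuple[float, float]:
    """Baseline of normal behaviour: mean and standard deviation of a
    historical metric, e.g. documents accessed per user per day."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_abnormal(value: float, mean: float, stdev: float,
                threshold: float = 3.0) -> bool:
    """Alert when an observation is more than `threshold` standard
    deviations away from the baseline mean."""
    return abs(value - mean) > threshold * stdev

history = [102, 98, 110, 95, 104, 99, 101]   # normal daily access counts
mean, stdev = build_baseline(history)
print(is_abnormal(103, mean, stdev))   # False: within the normal range
print(is_abnormal(5000, mean, stdev))  # True: flag for investigation
```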
Don’t forget, the process is not only about detection and prevention, it must also consider how the business will deal with a breach, including notification and response times to avoid financial penalties.
Last, but not least, you must train your staff to identify and correctly handle personal information, and to escalate quickly in the case of a breach. People will make mistakes, so your processes should prevent errors from causing a breach where possible and, if not, quickly raise awareness of the breach so it can be investigated, resolved and reported.
The Security Evangelist - Lesson V: Availability
This entry was originally posted on the Workshare blog, at https://www.workshare.com/blog/security-evangelist-lesson-v-availability.
The final aspect of security I want to discuss in this series is Availability: the ability to access data when needed. While availability is often not considered part of security, it is part of the CIA (Confidentiality, Integrity, Availability) approach that we use at Workshare.
Availability can be in conflict with security, especially with confidentiality; after all, the most secure computer is one unplugged from the network and turned off, but that is not useful when you need the data.
There are different levels of availability issues, from temporary outages to complete data loss, and all of them need to be considered.
In order for data to always be available when needed, we first have to understand the requirements.
Availability considerations usually include the following elements:
- Determining availability requirements.
- Availability periods: Some data must be available 24/7, but other data may only be needed Monday to Friday, during office hours. If you do not need 24/7 availability, you will be able to schedule maintenance out of business hours without affecting your users.
- Up-time requirements: Due to the nature of computer systems, it is impossible to guarantee 100% availability and, even if it were possible, it may not be cost effective. Up-time requirements are usually guaranteed by a Service Level Agreement (SLA) with your customers; the sketch after this list shows what an SLA percentage means in practice.
- Definition of what uptime is: In most cases, availability SLAs may not cover the whole infrastructure or service, or may not include hard limits on response times.
- Risk analysis: Identifying the different things that may go wrong in the infrastructure (including human error), how they will affect availability and how they can be mitigated.
- Technical measures: There are a number of technical measures to help with availability. At the most basic level, you can increase availability (and cost!) by adding redundancy and eliminating, or at least reducing, single points of failure, but you can go all the way into self-healing systems.
- Constraints: There will be multiple constraints at different levels, such as monetary, architectural, staffing or even technical ones, which will shape the requirements.
- Monitoring: This is a combination of automation and manual intervention to verify that the system is "up". A common approach to monitoring sets different thresholds that go from healthy to warning (the system is approaching its limits) to critical, where the system is unavailable.
- Processes: A major aspect of maintaining high levels of availability is recovering from failure situations. This includes technical measures as well as processes and documentation ensuring that issues are handled quickly and correctly.
- Business Continuity and Disaster Recovery: One often forgotten element is having processes and tools that ensure the system can be brought back up after a major incident affecting the infrastructure or the business.
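Here is the sketch promised in the up-time item above: it translates an SLA percentage into the downtime budget it actually allows.

```python
def downtime_allowance(sla_percent: float,
                       period_hours: float = 24 * 365) -> float:
    """Hours of downtime permitted per period (default: one year)
    under a given availability SLA."""
    return period_hours * (1 - sla_percent / 100)

for sla in (99.0, 99.9, 99.99):
    print(f"{sla}% uptime allows {downtime_allowance(sla):.2f} hours/year")
# 99.0% uptime allows 87.60 hours/year
# 99.9% uptime allows 8.76 hours/year
# 99.99% uptime allows 0.88 hours/year
```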
The move to the cloud has completely changed the approach to availability. While in the past the goal was to build systems that did not fail, with enough capacity to handle peak load, the normal approach now is to build fault-tolerant systems that can scale up and down as required. This approach can be both cheaper and more reliable, but it requires designing with those constraints in mind and will complicate the solution.
At Workshare, services are designed with availability and scalability in mind. All of our services are replicated, with minimal single points of failure and a multi-cloud approach to ensure that an outage at our main provider will not affect our customers.
This is the final post in The Security Evangelist series. I hope you have enjoyed them!
Cisco WebEx: What went wrong?!
This entry was originally posted at https://www.workshare.com/blog/cisco-webex-what-went-wrong.
I try to keep up with the latest security news, but sometimes it feels like it's impossible to read everything that happens - too many things going wrong too many times.
One of the most important issues I have seen of late is a remote code execution hole in the Chrome plugin for WebEx, a conferencing program used by around 20 million people across the world, particularly in enterprises.
What this means, in plain English, is that simply by visiting a URL in your Chrome browser, a remote attacker can run any software on your computer with your current permissions, without you having to do anything else. All you have to do is click on the wrong link in your email, Slack, Skype or a website, and someone may be able to do whatever they want with your computer.
The interesting part is how it works. The plugin contains what amounts to a backdoor: a remote command execution capability that allows it to be controlled remotely. The plugin also bundles a C runtime, a low-level library that provides various bits of functionality, including a function to execute commands at the operating system level, and this is what makes running arbitrary commands possible.
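As a deliberately simplified illustration of the anti-pattern (this is not WebEx's actual code, just a toy showing why exposing an exec capability to web-triggered input is catastrophic):

```python
# Toy illustration only: an endpoint that forwards request-supplied
# strings to an OS-level exec function is, in effect, a remote
# command execution backdoor.
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

class VulnerableHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        cmd = parse_qs(urlparse(self.path).query).get("cmd", [""])[0]
        # The fatal flaw: attacker-controlled input goes straight to the OS.
        output = subprocess.run(cmd, shell=True, capture_output=True).stdout
        self.send_response(200)
        self.end_headers()
        self.wfile.write(output)

# Visiting http://localhost:8000/?cmd=whoami would run `whoami` on the host:
# HTTPServer(("localhost", 8000), VulnerableHandler).serve_forever()
```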
How did it get there? We have no idea.
We can guess that it was added during development, to make it possible to test different parts of the application, and then forgotten, or maybe it was put there on purpose; either way, it indicates that Cisco's security practices have been shaky (to say the least). The fact that the URL requires a reasonably complicated string to trigger the behaviour may indicate there was some effort to secure the application, but it was ineffective.
The recommended solution was to remove the affected version and update to version 1.0.3 but, again, that version was not properly tested and did not fully resolve the issue: any XSS on webex.com would still have allowed a remote attacker to run things on your system. Even version 1.0.5, the current patched version, is still vulnerable. For our clients who are users, or for any users in fact, the only safe option right now is to fully remove the plugin until Cisco issues a valid fix. If you really need the plugin, at the very least upgrade to 1.0.5.
And, be careful out there.
References:
WebEx security issue: https://bugs.chromium.org/p/project-zero/issues/detail?id=1096
Issue with the original fix: https://twitter.com/filosottile/status/823655843388395525
Long standing VPN bug: https://blogs.cisco.com/security/shadow-brokers
The Security Evangelist, Lesson IV: Integrity continued...
This entry was originally posted on the Workshare blog, at https://www.workshare.com/blog/the-security-evangelist-lesson-iii-integrity-continued.
Data integrity is quite a subject and couldn't be covered in one post, so here's part two: how to prove the integrity of your data...
Integrity guarantees do not prevent authorized modification of data; otherwise, it would be impossible to add new versions of a document. What they do is provide users with ways of identifying all changes and who made them.
To be able to prove the integrity of your data, you first have to establish the baseline: the original data that you want to protect. A normal assumption is that the first version of a document is the canonical version and that any versions that appear later are modifications of the original.
You then have to establish policies and controls around handling the data. This will ensure people understand how to manage it, whether it’s critical or not, and what to do when something goes wrong.
It is important to set permissions and tooling to prevent accidental modification and provide monitoring and metrics that enable you to assess events as they happen, depending on the criticality of the data in question.
There must be an established process to identify changes to data, including details about the changes themselves: the authors, modification times and any other data that will enable auditing. It should not be possible to modify data without leaving traces, and common policy is that changes are never rolled back; if a version of a document is found to be faulty, a new version undoing the mistake should be added, and both versions kept for historical purposes.
Logs must be kept for as long as legally required and must include enough protection to prevent tampering. A common approach is for systems to generate logs that get exported to a separate system, with a completely different set of permissions required to access them. The ideal approach is to enable append-only logs, where data cannot be modified once it is in the log, no matter what level of permission you may have.
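To hint at how tamper-evident logging can work (and to preview the blockchain idea mentioned at the end of this post), here is a minimal hash-chain sketch: each entry embeds the hash of the previous one, so altering any past entry invalidates every hash that follows it.

```python
import hashlib
import json
from datetime import datetime, timezone

class HashChainedLog:
    """Minimal sketch of a tamper-evident, append-only log."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value

    def append(self, author: str, action: str) -> None:
        entry = {
            "author": author,
            "action": action,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prev_hash": self._last_hash,  # chains this entry to the previous one
        }
        self._last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = self._last_hash
        self.entries.append(entry)

    def verify(self) -> bool:
        """Recompute the chain; any modified entry breaks verification."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev_hash"] != prev or digest != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = HashChainedLog()
log.append("alice", "created document v1")
log.append("bob", "added document v2")
print(log.verify())                    # True
log.entries[0]["author"] = "mallory"   # tamper with history...
print(log.verify())                    # False: the chain is broken
```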
Finally, you must set up policies and processes to deal with unauthorized modification. This must not be an afterthought: in the event of a major breach, you must be working on resolving the issue, not trying to work out how to start investigating or who you should be talking to. All requirements should be established in advance, including communications, reporting and crisis management. The importance of this last step cannot be overstated, especially with new data protection legislation coming into force, such as the GDPR, which requires timely reporting of any data breach to customers and the authorities.
The importance of data integrity must be fully understood across the business. A corrupt or incorrectly modified document in circulation will not only cause reputational damage, it may also cause large amounts of financial pain in the form of legal cases and external audits.
Once policies and controls are in place, it becomes a matter of regularly reviewing and verifying them and taking any learnings from events and the way they are handled, whether successful or not.
In later posts, we will look at how blockchain can help create logs that are open and tamper-proof.