I have been asked a few times why I host my blog on Blogger and not on my own site.
The answer is simple: right now I do not have much time to manage a blog platform and do not feel the need. My personal blog is a secondary activity that I do for fun and in the hope that people find it useful, so there is no incentive to invest a great deal of time in running it.
In the past, I not only hosted my own blog, but I also wrote the code behind it. That was a great learning experience, but it requires a time investment that I cannot currently make.
But you are an expert; you should be showing off your skills! Well, yes, I do know what I am doing, and that is one of the reasons why I am outsourcing the running of the blog to Google. By doing this, I know I will not make money from the blog (not that I intend to, anyway) and that I lose control over many things that are only possible when you self-host. On the other hand, I do not have to keep the system up to date, manage security issues in my code or in any libraries I am using, or keep up with spam comments (not that people comment anyway).
Finally, my blog is a perfect candidate for being just static content hosted somewhere, but that would require a third-party service for comments, at which point I am not far off from where I am now, so I might as well take full advantage of someone else hosting the blog for me.
Gabriel Tabares' Blog
Technology at the management level
Tuesday, 28 March 2017
GDPR - The five things you wanted to know (but were too afraid to ask)
This entry was originally posted on the Workshare blog, at https://www.workshare.com/blog/gdpr-the-five-things-you-need-to-know-but-were-too-afraid-to-ask.
The General Data Protection Regulation (GDPR) is a new law that will come into effect in the European Union (EU) on the 25th of May, 2018. Its primary goal is to strengthen and unify data protection for individuals in the EU. The GDPR replaces the Data Protection Directive from 1995 and marks a major departure in many aspects.
Without further ado, let’s look at the five things you need to know about the GDPR and how it changes the rules.
1. Changes the definition of personal data
Article 4 defines personal data as ‘any information relating to an identified or identifiable natural person’. Until now, the meaning of ‘identifiable’ was open to interpretation, but Recital 26 settles it as identification by ‘all means reasonably likely to be used’. This means that even if data held by a business is not identifiable on its own, it may still be considered personal if it can be used to identify a person when aggregated with other data sources.
The GDPR also clarifies that a personal identifier does not need to be a name: it can be something like an ID, an online handle, an IP address or a cookie.
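As a hypothetical sketch of the aggregation point above (all data and field names here are invented), two datasets that look anonymous on their own can identify a person once joined on a shared attribute such as an IP address:

```python
# Invented example: "anonymous" web analytics joined with a CRM export
# re-identifies the person behind each page view via the shared IP.
page_views = [
    {"ip": "203.0.113.7", "page": "/pricing"},
    {"ip": "198.51.100.4", "page": "/careers"},
]
crm_leads = [
    {"ip": "203.0.113.7", "name": "Jane Doe", "email": "jane@example.com"},
]

lookup = {lead["ip"]: lead for lead in crm_leads}
identified = [
    {**view, "name": lookup[view["ip"]]["name"]}
    for view in page_views
    if view["ip"] in lookup
]
print(identified)  # the "anonymous" browsing record is now personal data
```

Under the Recital 26 test, the page-view data on its own would already count as personal data if a join like this is ‘reasonably likely to be used’.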
2. Requires consent
Valid consent will be required before storing or processing personal data, and that consent must cover both the data collected and the purposes it will be used for.
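One way to picture this (a minimal sketch; the record structure and names are my own invention, not anything mandated by the regulation) is a consent record that ties a data subject to specific data categories and specific purposes:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional, Tuple

# Hypothetical consent record: a use of data is only covered if consent
# is still active and explicitly names both the category and the purpose.
@dataclass
class ConsentRecord:
    subject_id: str
    data_categories: Tuple[str, ...]   # e.g. ("email", "ip_address")
    purposes: Tuple[str, ...]          # e.g. ("newsletter",)
    granted_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    withdrawn_at: Optional[datetime] = None

    def covers(self, category: str, purpose: str) -> bool:
        return (self.withdrawn_at is None
                and category in self.data_categories
                and purpose in self.purposes)

consent = ConsentRecord("user-42", ("email",), ("newsletter",))
print(consent.covers("email", "newsletter"))  # True
print(consent.covers("email", "profiling"))   # False
```

The key design point is that consent is scoped and timestamped, so it can be evidenced later and checked per use, rather than treated as a single yes/no flag.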
3. Depends on the data subject location, not just the company location
In the past, EU data protection regulation only applied to businesses within the EU. The GDPR specifies that any company that handles the personal data of individuals within the EU is now responsible for that data and must follow the regulation, no matter where the company is located. This means you can’t escape this regulation just by being outside the EU region.
Of course, by being in the EU, you’re still subject to the regulation, no matter where your data subjects are.
4. Includes responsibilities for processors, not just controllers
In GDPR parlance, the controller is the business that receives the data and consent directly from the data subject, while the processor is any company that processes or stores the data for the controller.
Under the GDPR, processors are required to demonstrate the same level of compliance and security as the controller. The processor is also required to notify of any breaches ‘without undue delay’. Considering that the controller is required, by law, to promptly notify the authorities of any breaches, this is a major point of contention, and the relationship between controller and processor must be governed by a binding contract.
Processors are also not allowed to transfer data to any sub-processor without written agreement with the controller and, even where such an agreement exists, prior notice must be given so the controller can raise objections.
5. Increases and clarifies the rights of the data subject
The GDPR includes provisions on the data subject’s rights to rectification and erasure of data, especially in cases where the data is being processed unlawfully.
Just two more things, promise...
Sorry, it’s more than 5 things, but there is much information to digest. At this point, you can see that the new legislation brings major changes to the management of personal data, but we’re not done yet!
We’ve left the best for last.
6. Breach notification
Any data breach involving personal data must be reported to the relevant authority within 72 hours. There is no threshold below which a breach is exempt, so potentially any breach at all will require notification.
Individuals concerned must also be notified if it is determined that they will suffer adverse effects.
7. Severe penalties
Failure to follow the GDPR, including failure to notify of a breach, may result in a fine of up to €20 million or 4% of global revenue for the previous year, whichever is greater, as well as regular audits.
As you can see, the GDPR is an extensive change to data protection regulation in the EU, extending its protection beyond the existing level and scope and massively increasing requirements and fines.
One key thing about this legislation is that it comes into effect in May 2018, which is not far away, considering that it may require a complete overhaul of the way data is managed.
How can I prepare for the GDPR?
The first thing to do is get an understanding of the data you currently handle. You need to know all the data you process and which of it is considered personal data.
Once you’ve determined what data you handle, you must design and implement processes for correctly handling that data, including all protective measures to prevent breaches. The standard approach is to establish a baseline of what is considered to be normal behaviour and then set protective measures to initially alert on abnormal behaviour or breaches.
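The baseline-then-alert approach above can be sketched in a few lines. This is a minimal, hypothetical illustration (the metric, numbers and threshold are invented), using a simple standard-deviation rule over a historical baseline:

```python
import statistics

# Invented baseline: normal counts of personal-data records accessed per hour.
baseline = [120, 130, 110, 125, 118, 122, 127]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def classify(count, threshold=3.0):
    """Alert when an observation deviates more than `threshold`
    standard deviations from the established baseline."""
    z = abs(count - mean) / stdev
    return "alert" if z > threshold else "normal"

print(classify(125))    # normal: within the usual range
print(classify(5000))   # alert: possible bulk access or exfiltration
```

Real deployments would use far richer signals (who accessed what, from where, at what time), but the structure is the same: establish what normal looks like, then alert on deviations.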
Don’t forget, the process is not only about detection and prevention, it must also consider how the business will deal with a breach, including notification and response times to avoid financial penalties.
Last, but not least, you must train your staff to identify and correctly handle personal information and how to escalate quickly in the case of a breach. People will make mistakes, so your processes should prevent errors from causing a breach when possible and, if not, quickly raise awareness of the existence of a breach so it can be investigated, resolved and reported.
The Security Evangelist - Lesson V: Availability
This entry was originally posted on the Workshare blog, at https://www.workshare.com/blog/security-evangelist-lesson-v-availability.
The final aspect of security I want to discuss in this series is Availability; the ability to access data when needed. While this is normally something that is not considered part of security, it is part of the CIA (Confidentiality, Integrity, Availability) approach that we use at Workshare.
Availability can be in conflict with security, especially with confidentiality; after all, the most secure computer is one unplugged from the network and turned off, but that is not useful if you need data.
There are different levels of availability issues, from temporary ones to complete data loss and all of them need to be considered.
In order for data to always be available when needed, we have to first understand the requirements.
Availability considerations usually include the following elements:
- Determining availability requirements.
- Availability periods: Some data must be available 24/7, but other data may only be needed Monday to Friday during office hours. If you do not need 24/7 availability, you will be able to schedule maintenance out of business hours without affecting your users.
- Up-time requirements: Due to the nature of computer systems, it is impossible to guarantee 100% availability and, even if it was possible, it may not be cost effective. Usually, up-time requirements are guaranteed by a Service Level Agreement (SLA) with your customers.
- Definition of uptime: In most cases, availability SLAs do not cover the whole infrastructure or service, nor do they include hard limits on response times.
- Risk analysis: Identifying the different things that may go wrong in the infrastructure (including human error), how they will affect availability and how they can be mitigated.
- Technical measures: There are a number of technical measures to help with availability. At the most basic level, you can increase availability (and cost!) by adding redundancy and eliminating, or at least reducing, single points of failure, but you can go all the way into self-healing systems.
- Constraints: There will be multiple constraints at different levels, such as monetary, architectural, staffing or even technical ones, which will lead the requirements.
- Monitoring: This is a combination of automation and manual intervention to verify that the system is "up". A common approach to monitoring sets different thresholds that go from healthy to warning (the system is approaching its limits) to critical, where the system is unavailable.
- Processes: A major aspect of maintaining high levels of availability is recovering from failure situations. This includes technical measures as well as processes and documentation ensuring that issues are handled quickly and correctly.
- Business Continuity and Disaster Recovery: An often forgotten element is having the processes and tools that ensure the system can be brought back up after major incidents affecting the infrastructure or the business.
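The threshold model described under Monitoring can be sketched very simply. This is a hypothetical illustration (the metric and the 2%/10% levels are invented, not a recommendation):

```python
# Invented thresholds mapping an error-rate metric to the three
# monitoring states described above: healthy -> warning -> critical.
WARNING_LEVEL = 0.02    # 2% errors: the system is approaching its limits
CRITICAL_LEVEL = 0.10   # 10% errors: treat the service as unavailable

def health_status(error_rate: float) -> str:
    if error_rate >= CRITICAL_LEVEL:
        return "critical"
    if error_rate >= WARNING_LEVEL:
        return "warning"
    return "healthy"

print(health_status(0.001))  # healthy
print(health_status(0.05))   # warning
print(health_status(0.25))   # critical
```

The value of the warning band is that it gives operators time to act before the critical threshold, where the SLA clock is usually already ticking.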
The move to the cloud has completely changed the approach to availability. While in the past the aim was to build systems that did not fail, with enough capacity to handle peak load, the usual approach now is fault-tolerant systems that can scale up and down as required. This can be both cheaper and more reliable, but it requires designing with those constraints in mind and complicates the solution.
At Workshare, services are designed with availability and scalability in mind. All of our services are replicated, with minimal single points of failure and a multi-cloud approach, ensuring that an outage at our main provider will not affect our customers.
This is the final post in The Evangelist series. I hope you have enjoyed them!
Cisco WebEx: What went wrong?!
This entry was originally posted at https://www.workshare.com/blog/cisco-webex-what-went-wrong.
I try to keep up with the latest security news, but sometimes it feels like it's impossible to read everything that happens - too many things going wrong too many times.
One of the most important ones I have seen of late is a remote-execution hole in the Chrome plugin for WebEx, a conferencing program widely used by around 20M people across the world, particularly in enterprises.
What this means, in plain English, is that by visiting a URL in your Chrome browser, a remote attacker can then run any software on your computer with your current permissions and without you having to do anything. All you have to do is click on the wrong link on your email, Slack, Skype or a website and you may be in for someone doing whatever they want with your computer.
The interesting part is how it works. It looks like the plugin has a backdoor/remote command execution capability that allows it to be controlled remotely. The plugin also includes a C runtime, a low-level library that provides various bits of functionality, among them a function that executes arbitrary commands at the operating system level.
How did it get there? We have no idea.
We can guess that it was added during development to test different parts of the application and then forgotten, or maybe put there on purpose; either way, it indicates that Cisco's security practices have been shaky (to say the least). The fact that the URL requires a reasonably complicated string to trigger the behaviour may indicate that there was some effort to secure the application, which was ineffective.
The recommended solution was to remove the affected version and update to version 1.0.3 but, again, the fix was not properly tested and did not fully resolve the issue: any XSS on webex.com would still have allowed a remote attacker to run things on your system. Even version 1.0.5, the latest patched version, remains vulnerable. For our clients who are users, or for any users in fact, the only safe option right now is to fully remove the plugin until Cisco issues a valid fix. If you really need the plugin, at the very least upgrade to 1.0.5.
And, be careful out there.
References:
WebEx security issue: https://bugs.chromium.org/p/project-zero/issues/detail?id=1096
Issue with the original fix: https://twitter.com/filosottile/status/823655843388395525
Long-standing VPN bug: https://blogs.cisco.com/security/shadow-brokers
The Security Evangelist, Lesson IV: Integrity continued...
This entry was originally posted on the Workshare blog, at https://www.workshare.com/blog/the-security-evangelist-lesson-iii-integrity-continued.
Data integrity is quite a subject and could not be covered in one post, so here is part II: how to prove the integrity of your data...
Integrity guarantees do not prevent authorized modification of data, otherwise it would be impossible to add new versions of a document. What they do is provide users with ways of identifying all changes and who made them.
To be able to prove the integrity of your data you have to establish what the baseline is; the original data that you want to protect. A normal assumption is that the first version of a document is the canonical version and that any others that appear later are a modification of the original.
You then have to establish policies and controls around handling the data. This will ensure people understand how to manage it, whether it’s critical or not, and what to do when something goes wrong.
It is important to set permissions and tooling to prevent accidental modification and provide monitoring and metrics that enable you to assess events as they happen, depending on the criticality of the data in question.
There must be an established process to identify changes to data, including details about the changes themselves - the authors, modification times and any other data that will enable auditing. It should not be possible to modify data without leaving traces, and a common policy is to make rollback impossible: if a version of a document is found to be faulty, a new version undoing the mistake should be added and both kept for historical purposes.
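The append-only versioning policy above can be sketched as follows (a hypothetical illustration; the record fields and data are invented):

```python
# Invented sketch: versions are only ever appended. A faulty version is
# corrected by adding a new version, never by deleting or editing history.
history = []

def add_version(content, author, note=""):
    history.append({
        "version": len(history) + 1,
        "content": content,
        "author": author,
        "note": note,
    })

add_version("Q3 figures: 1.2M", "alice")
add_version("Q3 figures: 12M", "mallory")   # faulty change
add_version("Q3 figures: 1.2M", "alice",
            note="undoes v2; v2 retained for audit")

# All three versions remain available to auditors, including the bad one.
print(len(history))            # 3
print(history[-1]["content"])  # the corrected, current version
```

Because nothing is ever removed, the audit trail always answers both "what is the current state?" and "who changed what, and when?".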
Logs must be kept for as long as legally required and must include enough protection to prevent tampering. A common approach for this is to allow systems to generate logs that get exported to a separate system to prevent tampering and require a completely different set of permissions to access the logs. The ideal approach is to enable append-only logs, where data cannot be modified once it is in the log, no matter what level of permission you may have.
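A common way to get the append-only, tamper-evident property described above is hash chaining: each log entry embeds the hash of the previous one, so altering any past entry breaks the chain. This is a minimal sketch under invented data, not a production design:

```python
import hashlib
import json

# Hypothetical tamper-evident log: each entry stores the previous entry's
# hash, so modifying any historical entry invalidates everything after it.
log = []

def append_entry(event: dict):
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    log.append({"event": event, "prev": prev_hash,
                "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify() -> bool:
    prev_hash = "0" * 64
    for entry in log:
        payload = json.dumps({"event": entry["event"], "prev": prev_hash},
                             sort_keys=True)
        expected = hashlib.sha256(payload.encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

append_entry({"user": "alice", "action": "edit", "doc": "contract.docx"})
append_entry({"user": "bob", "action": "view", "doc": "contract.docx"})
print(verify())                   # True: chain intact
log[0]["event"]["user"] = "eve"   # tamper with history...
print(verify())                   # False: tampering detected
```

Exporting such a log to a separately permissioned system, as the paragraph above suggests, means an attacker would need to compromise two environments and recompute the entire chain to hide their tracks.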
Finally, you must set up policies and processes to deal with unauthorized modification. This must not be an afterthought: in the event of a major breach, you must be working on resolving the issue, not trying to understand how to start investigating or who you should be talking to. All requirements should be established in advance, including communications, reporting and crisis management. The criticality of this last step cannot be overstated, especially with new data protection legislation coming out, such as the GDPR, which requires timely reporting to customers and the authorities on any data breaches.
The importance of data integrity must be fully understood across a business. A corrupt or incorrectly modified document in circulation will not only incur reputational loss, it may also cause large amounts of financial pain in the form of legal cases and external audits.
Once policies and controls are in place, it becomes a matter of regularly reviewing and verifying them and taking any learnings from events and the way they are handled, whether successful or not.
In later posts, we will look at how blockchain can help create logs that are open and tamper-proof.
Thursday, 26 January 2017
How to handle a support request: GitLab
The same way as I complain when someone does a lousy job, I want to give props to a company that seems to do it just right.
I am a GitLab user by choice. A while ago I moved most of my repositories there. The initial reason for the move was that, at the time, GitHub did not allow private repositories for free users and I like to keep my projects private until I am ready to publish them.
Since then, I have not found a reason to move, it just works. Admittedly, I am a very light user, but I like what I see. As a security professional, I also like that they offer an on-premises solution and an open source one so you can fix issues if you find them.
A couple of days ago I got a GitLab invitation to a group I had not heard about. It sounded suspicious but I am currently waiting for a couple of technical tests to come my way and thought that it could be related and, in any case, there should be nothing that could attack my computer on the locked-down browser I use, so I accepted.
When I got into the group, there were over 180 users in it, but no other content. I dug around a bit more: all of the users had either joined in the last three or four hours or were still pending. At this point, I was sure that there was something wrong with the group, so I left and went to report it.
I sent an email to the support address (readily available in multiple places) and to an address I guessed for the security team (no bounce, so it may actually have reached them), and got a receipt notification at 15:24. So far, so good.
At 16:34 I got a confirmation email to let me know that someone was looking into the reported issue.
At 16:38, I got another confirmation email telling me that the user who invited me had been identified as a spammer and was being dealt with, along with a link to a ticket about a similar issue.
So, in less than an hour and a half, they read my report, performed an investigation and sent me a reply with the actions they were taking. Colour me impressed, especially because this is a free account and I have never paid them a penny.
I have seen GitLab representatives on multiple technical sites. Whenever someone mentions an issue with the product, one of them will ask for more details, and I have never seen them be rude or anything like that.
They have also been increasing the capabilities of the free version of the software as people requested them, while keeping obviously enterprise-related features for the paid one.
I was obviously predisposed to use them professionally before but now, after this experience, I am even keener. If they provide this level of support and responsiveness for free customers, I expect them to be great for paying ones.
Well done GitLab!
I am a GitLab user by choice. A while ago I moved most of my repositories there. The initial reason for the move was that, at the time, GitHub did not allow private repositories for free users, and I like to keep my projects private until I am ready to publish them.
Since then, I have not found a reason to move; it just works. Admittedly, I am a very light user, but I like what I see. As a security professional, I also like that they offer an on-premises solution and an open source one, so you can fix issues if you find them.
A couple of days ago, I got a GitLab invitation to a group I had not heard of. It sounded suspicious, but I am currently waiting for a couple of technical tests to come my way and thought it could be related. In any case, there should be nothing that could attack my computer through the locked-down browser I use, so I accepted.
When I got into the group, there were over 180 users, but no other content. I dug around a bit and found that all of the users had joined in the last 3 or 4 hours or were still pending. At this point, I was sure something was wrong with this group, so I left and went to report it.
I sent an email to the support address (readily available in multiple places) and to an address I guessed for the security team (no bounce, so it may actually have reached them), and got a receipt notification at 15:24. So far, so good.
At 16:34 I got a confirmation email to let me know that someone was looking into the reported issue.
At 16:38 I got another confirmation email telling me that the user who invited me had been identified as a spammer and was being dealt with, along with a link to a ticket about a similar issue.
So, in less than an hour and a half, they read my report, performed an investigation and sent me a reply with the actions they were taking. Colour me impressed, especially because this is for a free account and I have never paid them a penny.
I have seen GitLab representatives on multiple technical sites. Whenever someone mentions an issue with the product, one of them will ask for more details, and I have not seen them be rude or anything of the sort.
They have also been increasing the capabilities of the free version of the software as people requested them, while keeping obviously enterprise-related features for the paid one.
I was obviously predisposed to use them professionally before, but now, after this experience, I am even keener. If they provide this level of support and responsiveness to free customers, I expect them to be great for paying ones.
Well done GitLab!
Wednesday, 4 January 2017
Choosing Your Next Programming Language
It's the time for New Year resolutions and many of you will choose to learn a new programming language.
As with all choices, there are many ways of deciding which one you want but, having done this a few times, these are the criteria that I use.
First of all, what do you want to achieve by learning a new language?
If you want to find a new job, your best bet is one of the really popular languages, which means something like JavaScript, Java, C#, Python, PHP or Ruby. These are not the most exciting languages but they will increase the possibility of getting a new job.
Of course, if you do know of a company you want to work for, just choose whatever stack they use.
Also keep in mind that you are not only choosing a new programming language; you are also choosing a complete environment, with different tooling, libraries and documentation, and it may even require a different OS. With the rise of Open Source and Free software, cost should not be that much of an issue anymore, and most licenses will allow you to use the software for free, but it is worth keeping an eye on.
If you would like to improve your development skills, there are two approaches you can take. The first is to take a language you already know and push its boundaries, e.g. by writing a complete new framework from scratch, or by focusing on areas you do not normally work in, such as embedded work if you normally build desktop applications, or backend if you do frontend.
The second is to choose a new language that uses a different paradigm from what you normally use. For functional languages, you can use Haskell (pure, lazy evaluation), Scala (hybrid functional/OOP on the JVM), F# (hybrid functional/OOP on the CLR) or OCaml (hybrid functional/OOP, compiled to native). For actor-based systems, there are Erlang, Elixir (a more modern language on the Erlang VM, with improved libraries, tooling and macros) and Scala. If object oriented is what you want, you could look at Ruby (dynamically typed, OO), Smalltalk (the daddy of OOP) or many others. If you want to program with statically typed languages, you can choose a new one (Go, Rust and Swift are popular) or go with statically typed languages that extend dynamic ones, such as Flow or TypeScript.
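To make the paradigm difference concrete, here is a small, hypothetical sketch in Rust (which supports both styles) contrasting an imperative loop with a functional pipeline of iterator adapters; the function names are my own, just for illustration:

```rust
// Summing the squares of the even numbers in a slice, written two ways.

// Imperative style: mutable state and an explicit loop.
fn sum_even_squares_imperative(numbers: &[i32]) -> i32 {
    let mut total = 0;
    for n in numbers {
        if n % 2 == 0 {
            total += n * n;
        }
    }
    total
}

// Functional style: a declarative pipeline with no mutable state.
fn sum_even_squares_functional(numbers: &[i32]) -> i32 {
    numbers
        .iter()
        .filter(|n| *n % 2 == 0) // keep the even numbers
        .map(|n| n * n)          // square each one
        .sum()                   // fold the results into a total
}

fn main() {
    let numbers = [1, 2, 3, 4, 5, 6];
    // 2*2 + 4*4 + 6*6 = 56 either way.
    assert_eq!(sum_even_squares_imperative(&numbers), 56);
    assert_eq!(sum_even_squares_functional(&numbers), 56);
    println!("both styles agree");
}
```

Both compute the same result; the point of learning a new paradigm is getting comfortable expressing problems in the second shape when it fits better.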
A different (and complementary) way to choose a language is to decide what you want to do. Web development? Java, C#, Ruby, Python, PHP, JavaScript. Web APIs? The previous ones plus Go. Desktop application development? C#, Java, C++. Mobile applications? Swift for iOS, Java for Android, JavaScript for both (with Cordova/React Native), C# with Xamarin for both. Systems and command-line development? C, C++, Rust.
Again, you will often use the language that provides the frameworks and libraries you need. If you want to do Windows desktop development, your best bet is C#; if you are targeting Linux, probably C++ or C, and so on.
In my experience, the most important thing about learning a new language is to find a project that you want to do and to drive it to completion. Start small and, once you have something completed, add to it or find a more ambitious target.
Before someone asks, my chosen languages for early 2017 are: a completely new language in an area I haven't touched for a while, systems programming, for which I've chosen Rust; and a language that I know but haven't used seriously for a while, JavaScript, both backend and frontend.
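In the "start small" spirit, a first Rust project could be something like a stripped-down `wc`: a few lines that read standard input and report line, word and character counts. This is just a sketch of the kind of starter project I mean, not part of the original post:

```rust
// A minimal `wc`-style tool: counts lines, words and characters on stdin.
use std::io::{self, Read};

// Pure counting logic, kept separate from I/O so it is easy to test.
fn counts(text: &str) -> (usize, usize, usize) {
    let lines = text.lines().count();
    let words = text.split_whitespace().count();
    let chars = text.chars().count();
    (lines, words, chars)
}

fn main() {
    let mut input = String::new();
    io::stdin()
        .read_to_string(&mut input)
        .expect("failed to read stdin");
    let (lines, words, chars) = counts(&input);
    println!("{} {} {}", lines, words, chars);
}
```

Small enough to finish in an evening, but it already exercises ownership, string handling and error handling, and it gives you something working to grow from.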