Friday, November 3, 2017

RFC8280 - Human Rights Protocol

RFC 8280 is an odd one, but it has the potential to become one of the most important RFCs ever. Just as, at some point, every RFC came to include a "Security Considerations" section, Security is now but one of several basic aspects to weigh when designing a new technology.

Welcome to the other considerations. Most are (arguably) absolute and consensual, e.g., Heterogeneity Support or Reliability. Others, such as Content Agnosticism, Censorship Resistance, or Privacy, are not quite so.

Just coming up with such a list is fascinating. I'd have some to add -- as in, for consideration when designing a new protocol -- while others feel redundant or not advisable at all (is "decentralisation" always desirable?)

Here's a list of the "moral content" of a protocol:

6.2.1. Connectivity
6.2.2. Privacy
6.2.3. Content Agnosticism
6.2.4. Security
6.2.5. Internationalization
6.2.6. Censorship Resistance
6.2.7. Open Standards
6.2.8. Heterogeneity Support
6.2.9. Anonymity
6.2.10. Pseudonymity
6.2.11. Accessibility
6.2.12. Localization
6.2.13. Decentralization
6.2.14. Reliability
6.2.15. Confidentiality
6.2.16. Integrity
6.2.17. Authenticity
6.2.18. Adaptability
6.2.19. Outcome Transparency

Thursday, August 31, 2017

Privacy done wrong - online registrations

Companies: Data Privacy is far more than a Privacy Notice.

Having had a Gmail address for several years means getting lots of emails from people who either mistyped their own address or have little practice with new (!) technologies. Add to this that many companies do not first check whether the address is correct.

Can you already see the GDPR fines?

I have a perfect example. I got an email about a registration with a large company. I was never asked for a confirmation. Art. 5(1)(d) is already breached:
(d) accurate and, where necessary, kept up to date; every reasonable step must be taken to ensure that personal data that are inaccurate, having regard to the purposes for which they are processed, are erased or rectified without delay;
I tried to find a way to tell them. The only way seems to be to log in, which I can't because I do not have a password. So I request one, to try to remove or change the email address. It comes, I log in, and what do I see?

* the full name
* the full address
* the IRS number (!)
* the phone number
* date of birth

This is particularly dangerous because the person in question is vulnerable. Identity theft made easy. With some social engineering, I can only imagine how nasty this could become.

To add to the incompetence, the email field cannot be changed. Finally, there's a pre-ticked checkbox to accept the T&Cs.
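For the record, the confirmation step the company skipped is a few lines of code. Here is a minimal sketch of a double opt-in: nothing is treated as a verified address until the mailbox owner clicks a link carrying the token (the secret and the endpoint are placeholders, not anything the company actually uses):

```python
import hashlib
import hmac
import secrets

SECRET = b"server-side secret"  # placeholder; keep out of source control


def issue_confirmation_token(email: str) -> str:
    """Token to embed in the confirmation link; the address is not activated yet."""
    nonce = secrets.token_hex(8)
    sig = hmac.new(SECRET, f"{email}:{nonce}".encode(), hashlib.sha256).hexdigest()
    return f"{nonce}:{sig}"


def confirm(email: str, token: str) -> bool:
    """Only when this returns True should the address be stored as verified."""
    nonce, sig = token.split(":")
    expected = hmac.new(SECRET, f"{email}:{nonce}".encode(), hashlib.sha256).hexdigest()
    # constant-time comparison, so the token cannot be guessed byte by byte
    return hmac.compare_digest(sig, expected)
```

A mistyped address never gets confirmed, so Art. 5(1)(d)'s "reasonable step" is covered before any data is attached to the wrong person.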

Can't wait for May 2018.

Tuesday, August 15, 2017

Informed consent and evidence for GDPR -- BBC website

Really liked the way the BBC combines informed consent and usability. The registration form is the point where they collect personal data for this particular process. Rather than a link in the small print, buried at the bottom of the page, they give hints on the fields themselves and link to a Policy. Tooltips are available that can be collapsed or expanded as needed.

Also notice the two buttons about joining the newsletter: no defaults or pre-ticked boxes. An affirmative action has to be taken, either way.

This technique has a further advantage, as I see it. When the user submits the information, the backend server can take a snapshot of all the data, including the consent choices, as evidence.

Note that the tooltips, in themselves, may not completely address all requirements. As a user, I still have to click on the links to understand what they are doing with my personal data.
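A minimal sketch of what such a snapshot could look like: the submitted fields, the policy version shown, and a timestamp, hashed together so the record is tamper-evident. The field names are illustrative, not the BBC's actual schema:

```python
import hashlib
import json
from datetime import datetime, timezone


def consent_snapshot(form_data: dict, policy_version: str) -> dict:
    """Record what the user submitted and which policy they saw, as consent evidence."""
    record = {
        "submitted": form_data,
        "policy_version": policy_version,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    # canonical serialisation, so the digest is reproducible for later verification
    payload = json.dumps(record, sort_keys=True).encode()
    record["digest"] = hashlib.sha256(payload).hexdigest()
    return record
```

Stored write-once, such records are exactly the kind of evidence a regulator would ask for when the burden of proving consent falls on the controller.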

"Security doesn't matter" & ethereum hacks

In the early 2000s, there was a famous article titled "IT doesn't matter". At the time, this was preposterous. It caused lots of controversy, and I remember having several discussions until I realised people had not read the actual article.

Its thesis was very simple: IT doesn't matter because it had become a basic function, like HR. IT was not a competitive advantage anymore. IT was a "vanishing advantage".

Security doesn't matter.
Just like IT, Security & Privacy is never an end in itself, but a means to a goal. The goal, put simply, is the mission of the organisation. It is, and always has been, for the vast majority of companies, more of a licence to operate:

  • because its customers so require
  • because there are regulations to comply with
  • because it keeps the business running
Few businesses see Security as a need in the way they see Capital. Banking is one such sector. Banks typically do not care about ISO 27001 certifications because the business case for Security is intrinsic.

Security does not matter because it is becoming less and less of a competitive advantage. It is a necessary need (so to say) for doing business, and hence is becoming essential in the way HR or Sales are. More interesting is that Security is stepping into the customer front. To give an example, half of my time is spent not doing security as such but speaking to customers who want to be reassured we have good practices. Nowadays, for many, simply having an ISO 27001 certification is not enough. One has to show evidence of practices.

So the Ethereum hacks.
Considering this, it is just surprising that an Ethereum ICO could be hacked like this. Was there not USD 2k available to get some advice and implement, for example, stop-gap measures?

As Bruce Schneier says, the problem is not the smart contracts technology but rather the ecosystem around it.

Then another one, now with a small difference: they are an established wallet (browser integration) company and, my guess is, almost everyone there is a software developer. That typically sets up an environment of sloppy security, since Security gets seen as a technical challenge, like password entropy. Security is much, much more. Browse their website -- do we see anything about Security?

Yes, we do, after the 19th of July. Is there a security statement explaining what lessons they learned? No. What do they do? They set up a bug bounty. Because Security is all about buffer overflows, AES-256 and WAFs, innit?

One of these days I will sit down in front of a pint and write "The Manual Securitist" -- how to fully protect and certify an organisation using only manual/human controls. I hope I never have to do it, but I am fully convinced that all, say, CIS-20 or PCI DSS controls can be implemented, to a large extent, with the technology of the 1960s.


Thursday, August 10, 2017

Challenges with IoT Security

I was asked to complete a survey on Challenges and Promises of the Internet of Things. Not surprisingly, my laser beams were pointing at Security and Privacy.

Here's how I defined, to myself, the methodology:
* google "IoT challenges" and skim through results for exactly 10 minutes
* get a pint and brainstorm with myself
* let thoughts freely flow for exactly 45 minutes
* 5 minutes to fix sentences and typos

Here's the result.

What are the chief obstacles and Challenges to IoT adoption?

§  Lack of overall security frameworks.
This is not an issue of IoT per se, but of how everything is managed. For example, the SDLC is typically not secure, if any maintenance exists at all. There are two problems here: the actual hardware (updating the firmware) and the supporting cloud service. Note the recent US IoT Security Act (draft).

§  Secure technologies
The building blocks of IoT, at the Thing level, such as network protocols, micro-storage, IoT OSes, etc., are not secure enough. This combines with a lack of security wrappers that could mitigate the intrinsic lack of security.

§  Security complexity
Considering the many building blocks of an IoT product (hardware, network, servers, cloud, web, mobile app, etc.), it is exposed to virtually every threat known to man. If a service is only a website, or a car, at least the attack vectors are confined to that particular technology and implementation. Quite ironically, Things are small but they present a very large attack surface.

§  Physical security
Physically securing a device is really hard and expensive. There are no off-the-shelf, cheap, readily available solutions for that. Once a device is physically breached, the user's trust breaks, and trust in the Thing itself breaks too.

§  Security of software management
This is a special case but one of the biggest problems. It only becomes apparent with scale. On one hand, patching devices is not straightforward. On the other, installed software takes up more space with every update, while space is limited because upgrading the hardware will often be impossible or impractical. What will people do with legacy devices? Just like phones, they will keep using them, even those aware they are vulnerable to mere "script kiddies". Some 50% of all the phones in the world are trivially hackable.

§  Liability
If a Thing is compromised, it becomes a liability for both the manufacturer/developer and the user. Today, security liability is vastly undefined (the upcoming NIS directive will partially address this). Most DDoS-for-hire services use Things (like cameras) for exactly that.

§  Liability (2)
There are no robust Insurance products for Security. Fighting liability easily outweighs revenues from the product itself. Preparing for that scenario (so as to limit liability) is still taking baby steps, absent robust compliance frameworks. From where I stand, it is truly a case of "do everything you can think of", because it will sooner or later be used in court. Any little step may be the difference between going out of business overnight with a penalty and getting a slap on the wrist.

§  Liability (3)
If someone's home or car is hacked via the Thing, the company will be liable, up to (as is the case with connected cars) corporate crimes against life. Today not even medical devices are covered, but I expect this to change, down to home surveillance cameras used to watch a baby from another room in the house.

§  Corporate Liability/compliance and governance
Security nowadays is based on risk assessments. A Thing made by a company with no established Security practices will never be able to do business with established companies (even less with regulated ones). If they agree to manage the risks themselves, they will soon realise the cost of ownership is too high. There are plenty of lessons from the past.

§  Data Privacy issues
IoT devices have the potential to collect what seems, at first glance, to be unproblematic information. Profiling then becomes an issue (see the recent vacuum robot mapping homes and simply reselling the data -- how naive). Things in the Quantified-Self space are a good example. Excellent ideas, excellent products, (poor business models), but no idea of the ecosystem.

§  Unconvincing business models
The idea that an Arduino and a front-end developer are enough. IoT always comes with the promise of scale, but until then the underlying business models will always seem immature to me. The reason is that the actual hardware and software costs seem to be treated as central when they will, necessarily, become marginal. As an analogy, producing a car is 10% of the cost; the other 90% is maintaining the brand, operations, maintenance, the parts supply chain, etc.

§  Unconvincing business models (2)
Many still think that collecting Data will, sooner or later, become an asset. Usually, this comes with little care for Privacy or regulations. Collecting Data just because it may be monetised in the future will basically turn it into a liability.

§  Unconvincing business models (3)
The skill set of most IoT companies is ill-incentivised in my opinion. Hardware and front-end skills are only an enabler. Managing data, the product lifecycle and the supply chain is the right direction.

§  Lack of killer applications.
Which is surprising. So far, the best are smart meters -- they do their job, have a clear use case, are helpful, and are fully managed, belonging to the operator rather than the user.

§  Lack of interoperability between IoT
This will be a long-lasting one, and likely never truly solved. Combined with interoperability, lack of usability is also a problem once a certain scale is passed. Even today, using a printer is a problem, and there are countless academic solutions to autoconfigure a printer. We all get paper jams and "cannot find printer" far too often. Further, combining all the data from several Things in order to deliver value is, and always will be, a hard problem (no shortage of academic papers on it).

§  Things are physically clumsy
Cables and chargers…

Tuesday, August 8, 2017

UK legislative spree - now the NIS directive

The EU's GDPR is the hype of the year, but the Network and Information Systems (NIS) directive is the other side of it. The UK is now porting it (in compsci terms) to national legislation. The consultation is open on a number of issues.

Whereas they both, essentially, tackle Security of information services (is Privacy an aspect of Security or an area of its own?), there are differences with different implications at the governance level:

  • the GDPR is a regulation and applies straightaway; that is, it does not need to be ratified by national institutions. The NIS directive does need to be made local law.
  • Whereas GDPR is all about Privacy, NIS is about Availability of services. An example is a water operator whose control systems were compromised, leaving it unable to deliver clean water.
  • Both protect the common consumer and citizen, but whereas GDPR directly protects the Privacy of consumers, NIS is more indirect: action, I presume, will be taken by the regulators of each sector.
  • Penalties (still TBD) are substantial, but whereas GDPR can potentially kill a business overnight -- with fines up to EUR 20M --, Utilities are much less vulnerable. First, Utilities are very large companies for which provisioning for fines is always a mitigation option in the risk register; second, they are usually too big to fail, so an Essential Service provider cannot just be stopped overnight.
  • GDPR's object is the citizen-consumer. The list of Operators of "Essential Services" is still to be defined (it is part of the consultation).

Sunday, August 6, 2017

UK DfT guidance on Security for Cars

The Department for Transport (DfT) of the UK just released guidance on Security for connected cars. This is part of the CPNI and the overall State strategy for Security. It comes as the US is also drafting new legislation with the IoT Cybersecurity Improvement Act.

My first thought was that we have officially entered an era where products are regulated for Security, just as food has been for decades. This is most welcome. Just a few months ago I bought a cheap surveillance camera and, while trying to secure it well beyond the (ridiculous) factory level, I realised it was running the same firmware that Mirai was exploiting.

This would not really be a problem, because these devices all run some custom Linux and it never takes too long to get access and try to harden it. Where the firmware stops short, a firewall, some logging, or even a blind would do the trick. The problem is I really do not want to spend hours doing this, and keeping up with all the vulnerabilities, botnets and whatnot, for one device that cost £30. Trying to update the firmware only showed that nothing had been released in months.

The automotive industry and the connected-gadget one have very different problems. For example, whereas car manufacturers have a reputation to protect and the resources to maintain firmware, distribution/updating is complex. For connected gadgets, reputation means nothing to an Amazon seller, and cost is the primary driver. Maintaining software or running a customer service easily drains the razor-thin margins under their current operating models (it does not need to be so).

These regulations are one confident step towards changing everything. In the US, one aspect of the Act seems to be the obligation to release software updates. In the UK, one of the principles is board accountability, plus a requirement for post-sale "product after-care".

Tuesday, June 20, 2017

amazon S3 breaches - from report to leak

I first read this article on the 5th of June. Yesterday, 19 June, I read this one, after a report from UpGuard.

The original release does not explain in detail how the records were discoverable, but there are no coincidences. The moment I read about it, I thought of that first article about permissions, and about storing large data sets in a cloud service without caring about permissions. Expect more.

A few considerations
* this is mostly Americans, but had it happened in the EU, and by next year, the party would be bankrupt [memo to self: does GDPR apply to political parties?]

* the same way a hard disk is just a tool to store information, it sits passively in the security chain. A hard disk has no notion of permissions. It is the OS and the controls around it (such as encryption) that secure it against unauthorised access. Nobody, unless very technical, should access the hardware directly. A cloud storage service works the same way. Directly manipulating data this sensitive (even ethnicity and religion, the worst possible kind of leak) should never be done in bulk, as that is halfway to losing track of it. The proven way of handling personal data is understanding how it flows and, at each processing point, who has access to it and where it lies.
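This kind of bucket exposure can also be spotted programmatically. Below is a minimal sketch that flags world-readable grants in the shape of response that boto3's `get_bucket_acl` returns; the two URIs are the real AWS "everyone" groups, everything else is illustrative:

```python
# Grantee URIs that make an S3 bucket readable by the whole world
# or by any authenticated AWS account (barely better)
PUBLIC_URIS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}


def public_grants(acl: dict) -> list:
    """Return the grants in a get_bucket_acl()-style response that expose the bucket."""
    return [
        grant
        for grant in acl.get("Grants", [])
        if grant.get("Grantee", {}).get("URI") in PUBLIC_URIS
    ]
```

With boto3, this would be fed `s3.get_bucket_acl(Bucket=name)`; a non-empty result means the data is sitting in plain sight, which appears to be exactly what happened here.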

Saturday, May 6, 2017

breach -- PCI compliance

A few weeks ago a new major breach became known. It turns out there was a server with a database of 100,000+ customer records that even included credit card numbers. Apparently, the server was sitting in plain sight with no protection whatsoever (no username or password).

This is just unbelievable. Whereas I am not surprised about the personal information, the credit card numbers do puzzle me.

In order to offer payments, the company needs to comply with PCI DSS. It comes in different levels, and a major dividing line is the number of transactions and whether the credit card numbers themselves are stored:
  • 100k transactions is enough for them, in normal circumstances, to require an external audit and regular pentesting of their external interfaces
  • storing credit cards is MAJOR. It draws the line between a really simple compliance programme and a really complicated one. I am simplifying, but this is the main reason receipts show asterisks with only a few digits of the card visible. The rule is: process the payment and forget the card details.
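The truncation behind those asterisks is simple to get right. A sketch (assuming the input has at least four digits; PCI DSS permits displaying at most the first six and last four, and here we keep only the last four):

```python
def mask_pan(pan: str) -> str:
    """Truncate a card number, PCI DSS style: keep only the last four digits."""
    digits = "".join(ch for ch in pan if ch.isdigit())  # tolerate spaces/dashes
    return "*" * (len(digits) - 4) + digits[-4:]
```

For example, `mask_pan("4111 1111 1111 1111")` gives `"************1111"`. The point, of course, is that the full PAN should never have been kept in the first place.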
How was it possible that a server with this information was just accessible without even a password?

There are two explanations. One is that they lied, in multiple places, in their self-assessment and cheated when audited.

The other is far simpler: this is a forgotten server that was used for, e.g., backups.

PCI has been doing wonders for security for years now. However, it is very prescriptive and can, more or less easily, be cheated and applied quite inconsistently -- unlike ISO 27001, for example, which focuses on the management of security.

Again, I praise PCI for what they have done, but one should always keep in mind that the programme is a trade-off between simplicity and adherence. If the programme were too complex, the vast majority of (small) companies would not dare accept payments. So it sort of follows an 80/20 rule, similar to what the UK is doing with its 10-step programme: better to have "lots" of security, even if not comprehensive, than to require everything and the moon and have nobody do it.

Friday, December 30, 2016

the UK and GDPR

There were already strong indications that the UK would fully adopt the GDPR even after Brexit. A recent document (21 Dec) has, however, come out that further dispels any doubts, especially considering Britain is halfway out the door.

This document is penned by the Department for Culture, Media and Sport. It starts by saying:
Government will (...) improve cyber risk management (...) through its implementation of  (...) GDPR.

Then it closes any discussion with

For now, Government will not seek to pursue further general cyber security regulation for the wider economy over and above the GDPR.
Which, in a sense, is a remarkable statement, since the GDPR is not, strictly speaking, about cyber security. It embeds and enforces cyber security, but from the narrow angle of Personal Information.

Tuesday, October 18, 2016

GDPR and ransomware

Even if Brexit comes to change everything, a lot will pretty much stay the same, and the EU's GDPR (or just google it), which everybody will have to comply with by 2018, is a good example.

I am no legal expert, so I am surely missing a few things. What I am not missing is the cyber security impact of the GDPR. Now that it has been approved, all sorts of implications are coming out.

One of them is about ransomware, and the PCI council makes an interesting remark: if your company gets hit by ransomware and you were not prepared (e.g., with good, protected/offline backups), you will probably be advised to pay up, because at least you then have some chance of getting the documents back. The trouble is that the ransom will likely be the least of your costs: the GDPR plans to fine businesses up to 4% of revenues upon a breach, unless your company can properly show there was nothing, within reason, you could have done.
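"Good backups" includes knowing they are intact before you need them. A sketch of the verification half, in plain stdlib Python: build a hash manifest when the backup is taken, store it with the offline copy, and diff against it later (paths and workflow are illustrative):

```python
import hashlib
import os


def build_manifest(root: str) -> dict:
    """Map every file under root to its SHA-256, to be stored with the (offline) backup."""
    manifest = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                manifest[os.path.relpath(path, root)] = hashlib.sha256(f.read()).hexdigest()
    return manifest


def verify(root: str, manifest: dict) -> list:
    """Return the files that are missing or whose contents changed since the manifest was built."""
    current = build_manifest(root)
    return sorted(name for name in manifest if current.get(name) != manifest[name])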

Interestingly, cyber security done for GDPR alone is in effect inefficient, if the idea is to specifically address compliance. My recommendation would be this: implement a broad cyber security programme on information security, and then specifically trim and adjust it, via dedicated gap and risk assessments, to the GDPR.

incident - very good incident response

Amongst others, provides online bitcoin wallets, which makes it an attractive target. Last week something happened, fairly outside its reach, but the response was exemplary, even if that meant taking down the whole service for a day. I cannot blame them, and I applaud.

News says the registrar (whois results here) was breached, and can only accept the risk.

Attackers somehow managed to change the DNS records of the domain. The trick, I presume, is to redirect users to a specially crafted website with the same design and trick them into typing in their credentials.
Interestingly, a user on reddit posted an alert just an hour or so after it happened. I asked how he found out, and he told me he spotted it by accident when his application (that pulls data from ) was reporting errors. Most likely, the errors and warnings were due to the self-signed certificates the attacker was using. Firefox and Chrome are very zealous about this and request a multi-click authorisation from the user.

Practical highlights:
  • even if your budget is low, monitor your security. There is a lot one can do with off-the-shelf tools, open source, or even good ol' scripting.
  • have an incident response plan; document it and distribute it to everyone. If resources are lacking, just do what they did: pull the plug as soon as something happens that looks minimally serious. Then investigate, document, and use it to add more controls, technical or process, to your cybersec framework.
  • once more, it shows that HTTPS helps in more than one way. Buy certificates from a well-known CA and use them generously. The main browsers will cooperate -- often stopping the attack right there by alerting the user -- and are intensely pushing an https-only web.
  • if possible, sensible and practical, insist that your users adopt 2FA. In this case, it would have stopped breaches if combined, for example, with a single-active-session policy on the servers.
  • interestingly, if you have a software-based service on which others rely, take advantage of your users, as they all have the incentive. Leverage this crowd-sourced tool and turn it into an action in a new guideline. This user on reddit seems to have caught it really fast, probably faster than a SIEM could have, especially if you do not monitor that side of your infrastructure.
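The self-signed-certificate giveaway above can itself be monitored for with good ol' scripting. A minimal stdlib sketch: pin the SHA-256 fingerprint of your site's certificate and alert when the one being served differs (the host and fingerprint would be yours; both are placeholders here):

```python
import base64
import hashlib
import re
import ssl


def pem_fingerprint(pem: str) -> str:
    """SHA-256 fingerprint of the DER bytes inside a PEM certificate."""
    b64 = re.sub(r"-----(BEGIN|END) CERTIFICATE-----|\s", "", pem)
    return hashlib.sha256(base64.b64decode(b64)).hexdigest()


def cert_changed(host: str, pinned_fp: str, port: int = 443) -> bool:
    """Fetch the certificate currently served and compare it against the pinned one."""
    pem = ssl.get_server_certificate((host, port))
    return pem_fingerprint(pem) != pinned_fp
```

Run from cron against your own domain, `cert_changed("example.com", PINNED)` returning True is exactly the signal that reddit user stumbled on by accident, caught on purpose.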

Monday, August 29, 2016

£500/month could have saved Ashley Madison

The Ashley Madison incident is still a source of very interesting news. The dating website for organised shady affairs (no judgement whatsoever) went to an Australian court, and the hearing is public. This was the first time I saw a court do a risk assessment.

The report shows that Ashley Madison failed on its obligations to provide protection for the user data that, needless to explain, was highly personal.

It does not surprise me to learn that they shared passwords on a Google Drive, or that there was no multi-factor authentication for remote access to their systems, such as from a public location. Overall, 80% of security is about good technical controls that do not really need a Cyber Security Office. They do need, however, at some point, guidance from a security professional.

The only thing I have to say is that I am sure that in Canada they would find someone who, for some £500/month, would be able to act as interim CISO and stop them from having really bad practices.

This is basically my offer to many small companies or charities: let me spend 3~6 months in your company, do a nice gap assessment, align a report with some relevant framework, and have the company implement it and get started with Cyber Security. From then on, it's a matter of keeping it going with its own resources and some part-time steering. You even get a one-man SOC that will look out for obvious signs of compromise and issue alerts on major vulnerabilities and mitigation actions. Hence the round figure of £500/month.

80% of security, in my opinion, is low-hanging fruit, especially when the company has so many technical resources that are easy to train and discipline. Good policies and guidelines take you halfway there.
It is not enough to get ISO 27k certified, but it is certainly enough to cover all the gaps the report details, and much more.

Sunday, August 21, 2016

How Secure Software Development fits in a company

This is an interesting article about the importance of Secure Software Development. The title says it all: "To Protect Enterprise Data, Secure the Code".

I could not agree more; however, I am not sure I fully agree with the suggestions. It is suggested, above all, that developers should take on the burden. Moreover, the article sort of suggests that securing the code is central to securing the enterprise.

I have recently been speaking to a highly innovative company in the UK whose main product is software based. In a nutshell, they develop all-software contact centre solutions with which companies can keep in touch with customers over a plethora of channels. As such, secure code is of paramount importance.

The question is, this is not enough and, more than suggesting a 4-fold approach to a cyber security programme, I would never put the burden of secure coding on the developers.

The 4 work packages I have suggested should be obvious:
  • the internal organisation operations
  • the product delivery plan -- for the case of installing their products at the client's but having no further responsibility over operations after sign-off
  • secure software development lifecycle (S-SDLC)
  • secure hosted operations
Together, these provide 360-degree cyber security. There are a lot of overlaps and synergies, so it does not necessarily mean 4 independent programmes.

As for the role of developers, they should keep doing what they are doing now -- but the whole project delivery needs to be adjusted. Quite critical is to engage the developers in Risk Assessments and Table-Top Exercises before and after implementation. Engaging 3rd parties (for pentesting or code analysis), or having a non-developer (the CTO, if needed) do code sanitisation, are key steps that need not interfere with the normal, and always personal, development style.

Overall, most of a Secure SDLC can be implemented around the current practices and free style of each developer. Key to achieving this is inserting extra steps in the release plan -- to prevent design/implementation flaws --, engaging 3rd-party services, designing policies that let the cyber security office vet a release with adequate checks, and reserving some resources for software maintenance at the security level.

My message being: do not drop the onus of security on the developers. Let them work freely, add security as just another well-defined, deliverable requirement, and build the rest around them so they can focus on the key functionality.

Saturday, April 23, 2016

Shapeshift hack (a Bitcoin service)

Shapeshift (edited) is a startup revolving around Bitcoin (one of my lateral interests and a movement I follow quite closely). Last week they reported coins having been stolen. More than that, Erik Voorhees wrote a fascinating report of how it happened. It is a story I will be using in many talks.

My first reaction, shared in a reddit post, is that they actually didn't do anything fundamentally wrong. They're a startup, so getting the business up and running is the goal. This means they have no cybersecurity office and, worst of all, they are all tech people, which unfortunately feeds a stronger sense of "we don't need a cybersecurity programme because we have firewalls". I have worked with tech startups with an immensely skilled army of developers and managers that nonetheless show quite an alarming unawareness of many basic concepts of cybersecurity.

As I often say, cybersecurity is 20% about firewalls and 80% about organisational processes. In this case, what failed was the human element:
  • do not leave computers unlocked
  • do extensive background checks on new starts 
But who has never left a laptop open and logged in? I keep doing it even on public places. And how thorough and reassuring can background checks be?

There are, of course, many tactical improvements possible: secured critical operations, segregation and air-gapping of critical assets, much clearer/crisper separation of duties, much better auditing, much, much better accounting, etc. Beyond making the system harder to exploit, above all these would make it easier, and much faster, to understand what happened.

Sharing the story, with care not to reveal too much, was a good thing to do in my opinion. I do have a few unanswered questions, but I also feel it was an honest report. It reassured customers: everyone will be hacked at some point, and cybersecurity is mostly about minimising the damage, to a reasonable extent, when it happens, not about preventing it.

The fact that their source code is on the loose is alarming, though. They should subject it to a thorough analysis (by a 3rd party!) and set up a bug bounty programme. They are a business that relies on exposure to the public internet, and I can only imagine how many people are trying to exploit it.

Finally, in the words of Erik Voorhees, there is also this valuable lesson, so well formulated:
Though it sounds cliché, (...), do yourself a favor and bring in 3rd party professional help very early. We hadn’t needed it at first, because we were small. But growth creeps up on you, and before you know it you are securing significant assets with sub-standard methods.