Cloud computing with OpenShift
OpenShift and other PaaS products can help take some of the work out of deploying and managing systems.
Cloud computing security is one of my favorite subjects. Disclosure: It's actually my day job now at Red Hat, dealing with products like OpenShift (PaaS), OpenStack (IaaS), CloudForms (Orchestration), and so on. Please note for the purposes of this article, I'm largely going to ignore public clouds like AWS and OpenShift Online and focus instead on the on-premises side of cloud. Why? Well, back in the day, administrators used to deploy physical servers and thought that was great. Then, virtualization came along, and admins realized that deploying physical servers was a chore and that virtualization was the way to go.
Now, folks are moving into infrastructure as a service (IaaS) and deploying OpenStack internally, and being able to create a Heat template that deploys new systems in minutes is even easier! IaaS, however, is not the be-all and end-all. Platform as a service (PaaS) lets you largely ignore the operating system and network layers; ideally, you can specify something like "this application requires a web server, say, Ruby and Ruby on Rails for the application, and some back-end data storage, so I'll go with MongoDB and memcached," and be done.
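As a concrete illustration, a minimal Heat Orchestration Template (HOT) that boots a single server might look like the following. The image and flavor names are site-specific placeholders, not values from any particular deployment:

```yaml
heat_template_version: 2013-05-23

description: Minimal template that boots one server

parameters:
  key_name:
    type: string
    description: Name of an existing Nova keypair

resources:
  web_server:
    type: OS::Nova::Server
    properties:
      image: rhel-6.5        # placeholder; use an image from your Glance catalog
      flavor: m1.small       # placeholder; pick a flavor your cloud offers
      key_name: { get_param: key_name }

outputs:
  server_ip:
    value: { get_attr: [web_server, first_address] }
```

Feed this to Heat, and you get a running server in minutes; grow the template with more resources, and you're describing whole environments rather than individual machines.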
Often, application authors don't care, or don't want to care, about the underlying operating system or about tuning things like the database server (e.g., one user might have a dedicated instance, and another might be using a shared one). PaaS lets application authors avoid these details, leaving them to the PaaS layer.
I will admit, I'm biased when it comes to PaaS: My preference is OpenShift. For one thing, it's what Red Hat ships, but it's also one of the few PaaS offerings that is open source and available for download (unlike Heroku or Google App Engine). It also supports a wide variety of programming languages and services. In fact, OpenShift has an open cartridge specification; if your underlying platform (RHEL, CentOS, Debian, whatever) supports it, you can quickly wrap it in a cartridge for OpenShift and use it.
Do you need Fortran support? Cobol? Some obscure application to convert PDF files into haiku? OpenShift can do that. This approach obviates a lot of potential problems; moving existing applications into a PaaS can be difficult if the programming language is not supported or if a component requires something the PaaS does not provide. Moving things out of an existing PaaS is also a worry: If you pick a PaaS vendor that does not provide an open source version of the PaaS, you'll end up with a pile of specialized code and services that will take some time and effort to convert into something you can use on a different platform.
With OpenShift, you can simply buy it from Red Hat or grab the free version (OpenShift Origin) and deploy your own instance. Remember, one leg of the security triad is availability, and if you're locked into a platform, that's a potential risk that can really hurt.
SELinux vs. LXC
One of the biggest security decisions OpenShift faced was how to segment users. Many PaaS systems take one of two approaches: either they create separate virtual servers or instances using something like LXC, or they offer only a limited and "safe" subset of capabilities (e.g., no local storage, no shell access). The first approach of using virtual servers is problematic for performance; if you have 10 users, that means 10 running copies of the Linux kernel and all the userspace stuff needed to support them.
The second approach, of course, can be very frustrating for developers and users; you need to architect your application around the limitations of the PaaS. OpenShift opted for a third option: using SELinux to secure the system so that users can't attack each other, and using cgroups to prevent abuse of resources. The advantage of using SELinux is that you have one running kernel, and you can even use shared memory segments. So, if you have 10 identical copies of Apache HTTPD running, it doesn't consume a huge amount of memory, and the same is true for other services, like MySQL, Ruby on Rails, and so on.
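The Multi-Category Security (MCS) side of SELinux makes this kind of separation conceptually simple: each gear's processes and files are labeled with a unique pair of categories, so even though all gears run under the same SELinux type on one kernel, one gear cannot touch another's data. The sketch below shows one simple way to map a numeric user ID to a unique category pair; it is an illustration of the idea, not OpenShift's exact algorithm:

```python
def mcs_label(uid, categories=1024):
    """Map a numeric gear UID to a unique unordered pair of MCS
    categories, rendered as an SELinux level such as "s0:c4,c7".

    Simplified illustration -- not OpenShift's actual implementation.
    """
    idx = 0
    for a in range(categories):
        for b in range(a + 1, categories):
            if idx == uid:
                return f"s0:c{a},c{b}"
            idx += 1
    raise ValueError("UID exceeds the number of available category pairs")

# Every UID gets a distinct label, so processes in one gear cannot
# read files in another, even though both share one kernel:
print(mcs_label(0))  # s0:c0,c1
print(mcs_label(1))  # s0:c0,c2
```

Because the kernel enforces that a process may only access objects whose categories are a subset of its own, two gears with different category pairs are mutually invisible, with no virtual machine overhead.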
OpenShift also assigns each account its own local network space, so if you run a copy of MySQL, for example, only gears within your account (a gear is a collection of cartridges and software that runs under the same user account on the same server) can access it. Other accounts and gears cannot. This approach in turn allows OpenShift to give users their own copies of MySQL or Apache; OpenShift does not need to rely on a service supporting multiple users securely. So, for example, when an authentication bypass was found in MySQL, although OpenShift was affected, there was no way to exploit it directly: An attacker would first need to compromise the web application, for example, and then attempt to compromise MySQL.
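In practice, code inside a gear learns its private database address through environment variables; the names below follow the convention OpenShift's MySQL cartridge uses, and the localhost fallbacks are added here purely for illustration:

```python
import os

# Inside a gear, the MySQL cartridge publishes its private, per-gear
# address via environment variables (OPENSHIFT_MYSQL_DB_* naming);
# the fallback values here are for illustration only.
db_host = os.environ.get("OPENSHIFT_MYSQL_DB_HOST", "127.0.0.1")
db_port = int(os.environ.get("OPENSHIFT_MYSQL_DB_PORT", "3306"))

# Only code running in this account's gears can reach this address,
# so even a vulnerable MySQL is not directly exposed to outsiders.
print(f"connecting to {db_host}:{db_port}")
```

The application never hardcodes an address, and the address it does get is one that only its own account's gears can reach.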
Running Untrusted Code
In general, everyone likes to pretend that the software they're running is safe and "secure" if they keep it up to date. The reality is that software has publicly known flaws that haven't been fixed yet, and almost certainly privately known flaws that haven't been fixed either.
One way to deal with this situation is to run the software within some sort of container, such as chroot, or an entire virtual private server. The challenge lies in the overhead of doing so and the cost of setup, especially when you're defining allowed communications channels. Almost all modern software needs an input of data and an output of results to be useful, and, in many cases, much as on a Bash command line, admins tend to string programs together to do really useful things.
A framework like WordPress or Drupal is great, but you probably also want to generate nicely formatted documents, so a PDF converter is a good idea. Additionally, you want to email those documents out to people, and you want to talk to some third-party payment system so that customers can pay you. With OpenShift, you have two main options: You can place the various pieces that need to talk to each other within the same OpenShift gear, or, if you want to separate things further, you can have multiple gears and allow the components to talk to one another over the network. The good news is that many things now include a REST API, so talking to components over the network is becoming standard and allows components to be isolated more easily.
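To make the multi-gear pattern concrete, here is a small self-contained sketch of two components talking over a REST-style HTTP interface. The "PDF converter" service and its endpoint are hypothetical; in a real deployment each side would run in its own gear and use the gear's private address rather than localhost:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Hypothetical "PDF converter" component exposing a tiny REST endpoint.
# In OpenShift, this would live in its own gear, isolated from the
# front-end application that calls it.
class ConvertHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"status": "ok", "format": "pdf"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the example's output quiet

server = HTTPServer(("127.0.0.1", 0), ConvertHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The front-end component (the WordPress-like app) calls the service:
with urlopen(f"http://127.0.0.1:{server.server_port}/convert") as resp:
    result = json.loads(resp.read())
server.shutdown()

print(result["status"])  # prints "ok"
```

Because the only contract between the two pieces is an HTTP request and a JSON response, either side can be moved to a different gear, host, or even a different implementation without the other noticing.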