Posts

Showing posts from 2016

Ubuntu on Dell Latitude E6420 with NVidia and Broadcom

My company sold old laptops to employees and I used the chance to get an affordable and legally licensed Windows 10 system - a Dell Latitude E6420. Unfortunately the system has a Broadcom Wifi card and also ships with an NVidia graphics card, both of which require extra work on Ubuntu 16.04 Xenial Xerus. After some manual configuration the system works quite well, with a power consumption of about 10-15W while writing this blog article. Switching between the Intel and the NVidia graphics card is simple (via a GUI program, though it requires a logout and login); for most use cases I don't need the NVidia card anyway. Windows 10 also works well, although it does not support all devices. However, the combined NVidia / Intel graphics system works better on Windows than on Linux. In detail, I took the following steps to install an Ubuntu 16.04 and Windows 10 dual boot system.

Step-by-Step Installation

Requirements

Either a wired network connection or a USB wifi dongle that

Lifting the Curse of Static Credentials

Summary: Use digital identities, trust relationships and access control lists instead of passwords. In the cloud, this is really easy. I strongly believe that static credentials are one of the biggest hazards in modern IT environments. Most information security incidents are somehow related to lost, leaked or guessed static credentials; Instagram's Million Dollar Bug is just one very nice example. Static credentials:

- can be used by anyone who has them - friend or foe
- are typically very short for machine or service users and can even be brute-forced or guessed
- have to be stored in configuration files from where they can be leaked
- are hard to remember for humans, so they will write them down somewhere or store them in files
- typically stay the same over a long period of time
- don't include any information about the identity of the bearer or user
- are hard to rotate on a regular basis because the change has to happen in several places at the same time

All th
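The difference between a shared static secret and identity-based access control can be sketched in a few lines of Python. This is a toy model, not any real cloud API; all names (the secret, the identities, the actions) are illustrative:

```python
# Toy model contrasting a shared static secret with identity-based access.
# All names are illustrative; real systems use signed tokens and IAM roles,
# not plain strings.

STATIC_SECRET = "s3cr3t"  # anyone who obtains this string gets full access

def check_static(secret: str) -> bool:
    # No identity, no audit trail: friend and foe look identical.
    return secret == STATIC_SECRET

# Identity-based alternative: an access control list maps each identity
# to the actions it is trusted to perform.
ACL = {
    "billing-service": {"read:invoices"},
    "deploy-bot": {"read:config", "write:config"},
}

def check_identity(identity: str, action: str) -> bool:
    # The decision names the bearer, so every access can be logged,
    # audited, and revoked per identity without touching the others.
    return action in ACL.get(identity, set())

print(check_static("s3cr3t"))                        # anyone with the string passes
print(check_identity("deploy-bot", "write:config"))  # allowed by the ACL
print(check_identity("intruder", "read:invoices"))   # denied: unknown identity
```

Note how revoking access for one caller in the identity-based variant means deleting one ACL entry, while rotating the static secret means changing it everywhere at once.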

CoreOS Fest 2016 - Containers are production ready!

The CoreOS Fest 2016 in Berlin impressed me very much: A small Open Source company organizes a 2-day conference around their Open Source tools and even flies in a lot of their employees from San Francisco. A win both for Open Source and for Berlin. And CoreOS also announced that they got new funding of $28M.

Alex Polvi, CEO of CoreOS

More interesting for IT people everywhere is the message one can learn here: Container technologies are ready for production. There is a healthy ecosystem of:

- Open Source solutions: Kubernetes, Mesosphere DC/OS, containerd, even OpenStack and others
- Commercial editions with vendor support like Tectonic (CoreOS), Mesosphere Enterprise or Hashicorp Atlas
- 3rd-party tools solving common problems like persistent storage: StorageOS and Quobyte

In fact, choosing the "right" platform starts to become the main problem for those who still run on traditional Virtualization platforms. On the other hand, IT companies who don't

OSDC 2016 - Hybrid Cloud

The Open Source Data Center Conference 2016 is a good measure of how the industry changes. Compared to 2014, Cloud topics take up more and more space: both how to build your own on-premise cloud with Mesos, CoreOS or Kubernetes and how to use the public Cloud. Maybe not surprisingly, I used the conference to present my own findings from 2 years of Cloud migration at ImmobilienScout24: After we first tried to find a way to quickly migrate our data centers into the Cloud, we now see that a hybrid approach works better. Data center and cloud are both valued platforms and we will optimize the costs between them.

Hybrid Cloud - A Cloud Migration Strategy

Do you use Cloud? Why? What about the 15-year legacy of your data center? How many Enterprise vendors tried to sell you their "Hybrid Cloud" solution? What actually is a Hybrid Cloud? Cloud computing is not just a new way of running servers or Docker containers. The interesting part of any Cloud offering are mana

You can't control internal public data

Everywhere there is some data that is relevant either for all applications or for many applications in different parts of the platform. The "obvious" solution to this problem is to make such data internally public or world-readable, meaning that the entire platform can read it. The "obvious" solution to security in this case is actually having no security beyond answering the "are you part of us?" question. Common implementations of this pattern are world-readable NFS shares, S3 buckets readable by all "our" AWS accounts, HTTP APIs that use the client IP as their sole access control mechanism, etc. This approach is really dangerous and should be used with care. The risks include:

- You most likely don't know who actually needs the data and who does not.
- If you ever need to restrict access you will have a very long and tedious job ahead of you.
- You don't know who accessed the data for which purpose. After a data leak, yo
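The risks above can be made concrete with a small Python sketch. It models no real storage system; the dataset and service names are made up for illustration:

```python
# Toy sketch of why "internally public" data is hard to lock down later.
# Hypothetical names; no real storage API is modeled here.

world_readable = {"customer-emails": "alice@example.com,bob@example.com"}

def read_public(dataset: str) -> str:
    # Any internal caller may read - no record of who needed the data.
    # Restricting access later means guessing which callers will break.
    return world_readable[dataset]

# Alternative: explicit grants create the inventory you will need,
# and every read leaves an audit trail naming the caller.
grants = {"customer-emails": {"crm-service", "newsletter-job"}}
access_log = []

def read_granted(caller: str, dataset: str) -> str:
    if caller not in grants.get(dataset, set()):
        raise PermissionError(f"{caller} has no grant for {dataset}")
    access_log.append((caller, dataset))  # who read what, for later audits
    return world_readable[dataset]
```

With `read_public` the answer to "who accessed this data?" is unknowable; with `read_granted` both the grant list and the access log exist from day one.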

Go Faster - DevOps & Microservices

At the microXchg 2016 last week Fred George - who takes pride in having been called a hand grenade - gave a very inspiring talk about how all the things that we do right now have one primary goal:

Go Faster

Reducing cycle time for deployments, automation everywhere, down-sizing services to "microservices", building resilient and fault-tolerant platforms and more are all facets of a bigger journey: Provide value faster and find out faster what works and what doesn't.

DevOps

DevOps is seen by most developers as being an Ops movement trying to catch up with developers before their jobs become obsolete. Attending various DevOps Days in Germany and the USA, the developers who were also there always complained about the lack of developers and the lack of developer topics. They observed that the conferences seem to be by and for Ops people. Consequently, DevOps conferences usually have two tracks: Methods and Tools. Methods teach us how to do "proper" software develop

Cloud Migration ≈ Microservices Migration

Day two at the microXchg 2016 conference. After listening to yet another talk detailing the pitfalls and dangers of "doing it wrong" I see more and more similarities between the Cloud migration at ImmobilienScout24 and the microservices journey that most speakers present. The Cloud migration moves us from a large data center into many smaller AWS accounts. A (legacy) monolithic application is cut into many smaller microservices. Internal data center communication becomes exposed communication between different AWS accounts and VPCs. Internal function calls are replaced with remote API calls. Both require much more attention to security, necessitate an authentication framework and add significant latency to the platform. A failed data center takes down the entire platform while a failed AWS account will only take down some function. An uncaught exception will crash the entire monolith while a crashed microservice will leave the others running undisturbed. Interna

AWS Account Right-Sizing

Today I attended the Microxchg 2016 conference in Berlin. I suddenly realized that going to the cloud allows us to ask completely new questions that are impossible to ask in the data center. One such question is this: What is the optimum size for a data center? Microservices are all about downsizing - and in the cloud we can and should downsize the data center! In the world of physical data centers the question is usually governed by two factors:

- Ensuring service availability by having at least two physical data centers.
- Packing as much hardware into as little space as possible to keep the costs in check.

As long as we are smaller than the average Internet giant there is no point in asking about the optimum size. The tooling which we build has to be designed both for large data centers and for having more than one. But in the "1, 2, many" series, "2" is just the worst place to be. It entails all the disadvantages of "more than 1" without any o