Now available: RHEL Atomic Host Beta

Red Hat Enterprise Linux Atomic Host is now ready to download and test as a beta.

Red Hat Enterprise Linux Atomic Host is a secure, lightweight, minimal-footprint operating system that is optimized to run Linux Containers. A member of the Red Hat Enterprise Linux family, Red Hat Enterprise Linux Atomic Host couples the flexible, lightweight and modular capabilities of Linux Containers with the reliability and security of Red Hat Enterprise Linux in a reduced image size.

For more details on RHEL Atomic Host, check out the main product page.

When you've tried it out, let us know what you think here.

Responses

We'd like to understand the types of applications customers are looking to run on RHEL Atomic Host as well as the number of applications they envision being hosted on this new deployment type.

We run many JBoss instances per host that would benefit from being containerised; we also have smaller standalone Java apps as candidates. Apart from ease of container deployment, being able to easily place resource limits on each JBoss instance is a plus. Simple cluster elasticity is what's attractive.

I've spent the day running through the Kubernetes orchestration how-to from the product page without success. All is good until 'kubectl create': the pod is defined but never deployed to a minion. Pushing on...

Mark, thanks for providing your use case -- indeed, containerizing JBoss is something we are tracking as well. Feel free to provide additional details on what's breaking with 'kubectl create'. We will also make sure our docs are accurate.

Hi all,
we started checking the feasibility of RHEL Atomic + Kubernetes as a solution for rapid deployment of test environments for our customer, based on Docker containers. After a first installation of 1 RHEL (as Kubernetes master with a private Docker registry) and 2 RHEL Atomic (as Kubernetes minions), I was impressed by the great potential of Atomic, also as a solution for system patching in large companies (more than 3,000 Linux instances).

So generally I am impressed with Atomic, really.

Unfortunately, analyzing the problems I got configuring Kubernetes (basically the 2 minions do not communicate with the master), it was impossible to do any serious troubleshooting because some tools like tcpdump are missing ...

Is it correct not to include these tools in the Atomic host ISO?
If yes ... is it possible to install them later?
I understand that from the Atomic perspective I should rely on the rpm-ostree utilities ... but how?

Also, are the missing iptables-services and firewalld 'by design' or simply a 'youth problem'?
I find it difficult to imagine an enterprise system without them ...

Regards
Giampiero

Giampiero,

Thanks for your feedback on RHEL Atomic Host, and I'm glad to hear that the potential of Atomic impresses you. Including additional RPMs in the Atomic compose process is a tradeoff between providing all the functionality you are used to in RHEL and keeping the image size small while running additional tools in privileged containers. We will take a look at the tools you mention and see where they fit in the architecture. We also want to look into the Kubernetes issue you ran into. Can you provide more information on the issue?

Regards,
Bhavna

> it was impossible to do any serious troubleshooting because some tools like tcpdump are missing ...

It's possible to run this as a --net=host privileged container:

# Run tcpdump in a container, printing TCP SYN and FIN packets
docker run --privileged --net=host -ti rhel /bin/sh -c "yum -y install tcpdump; tcpdump -i wlp3s0 'tcp and tcp[tcpflags] & (tcp-syn|tcp-fin) != 0'"

We will look into iptables-services and firewalld.

Thanks,
Bhavna Sarathy

Hi Bhavna,
I just re-installed one of the 2 Atomic hosts with a regular RHEL 7 and reconfigured it similarly to the Atomic one in order to better test the configuration.

Now the master sees both the minions and I'm able to run / schedule a pod inside the cluster.

I suspect that the cause was in the apiserver configuration file. If I follow the Red Hat example and write a host alias inside the KUBE_MASTER statement like this ('master' is my alias for the Kubernetes master):

KUBE_MASTER="--master=master:8080"

I get the following:

Nov 25 14:53:42 test-l06 kube-scheduler: E1125 14:53:42.544839 06178 reflector.go:79] Failed to list *api.Pod: Get master:8080?fields=DesiredState.Host%21%3D: unsupported protocol scheme "master"

If I write the IP address directly, things work.
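
For reference, my guess from the error message is that the value is parsed as a URL and therefore needs an explicit scheme, so something like the following might let the alias work too (untested; the alias and port are just the values from my setup):

KUBE_MASTER="--master=http://master:8080"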

What is still not working - in the case of a multi-container pod - is the ability of the frontend container to connect to the backend one.

As Eric wrote in his post, it is likely that 'my' network environment is not properly configured ... in that case it would be really interesting to have a working example of a cluster with 2 minions and a multi-container pod for which the network environment works as expected.

Regards
Giampiero

I don't know what your problem is, but I can tell you some of the common bugs people run into with kube that we are working on:

  1. I don't remember if --machines is part of the apiserver or the controller-manager (it used to be apiserver and has recently changed to controller-manager). Either way, --machines handling is buggy in that release, though the trouble only shows up the first time you try to bring up the cluster. etcd must be running before the daemon that takes --machines is started, and the minions need to be up and reachable from the master machine before that daemon is started. "Reachable" means that the master can run "curl -s -L http://machine-ip:10250/healthz" and get back "ok" (see the sketch after this list).

  2. I've also seen people who didn't set --hostname-override in their kubelet configs. That needs to be set to the IP address that the master was told in --machines.

  3. The third common problem is that when people try to do multi-machine setups, they don't have a working underlying networking configuration: https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/design/networking.md This is one of the reasons we only describe the beta as single-host.
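
A quick sketch of the checks from points 1 and 2 (the minion IPs and the kubelet config path below are placeholders, not values from this thread):

# from the master: every minion handed to --machines should answer the kubelet health check
for minion in 192.168.122.11 192.168.122.12; do
    echo -n "${minion}: "
    curl -s -L "http://${minion}:10250/healthz"
    echo
done

# on each minion: advertise the same address the master was given,
# e.g. the relevant line in /etc/kubernetes/kubelet
KUBELET_HOSTNAME="--hostname-override=192.168.122.11"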

Ciao Eric,

I now have a - mostly - working cluster. As for your hints, what I experienced is that the apiserver config has some problem with parsing. As I already replied to Bhavna, it seems that using the dotted IP address directly makes it work.

As for your point 3, I'm really interested in having a working example of what Docker+Atomic+Kubernetes can do ... with 2 minions :)

Is it possible to have some working config for the network with 2 nodes/minions?

I also experienced some problems with the etcd cache: sometimes the master and the minion running a pod are not aligned ... the only solution is to delete the etcd cache ...

Regards
Giampiero

On the etcd cache issue, I'm not sure what you are referring to. Collect as much info as you can and file a BZ. We'll definitely take a look.

The networking requirement is simple: Kubelet needs to be able to start docker containers, and whatever IP docker gives those pods needs to be able to reach any other pod on any other machine by IP. Really this is a docker networking thing more than a kube networking thing.

We've prototyped and used numerous different ways to accomplish network connectivity between arbitrary docker containers. If you check out https://github.com/eparis/kubernetes-ansible you can see some of my attempts.

1) static addressing with static routes (only works if all machines are on the same L2 network; see the sketch below)
2) static addressing with GRE tunnel encapsulation between OVS bridges (removes the L2 network requirement)
3) flannel for dynamic address assignment and vxlan overlay networking

Obviously the first 2 are the most feasible options on top of the atomic beta, but the 3rd one shows great flexibility if you are looking at platforms where you can easily install software.
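
To make option 1 concrete, a rough sketch for two minions (host names, subnets and addresses are illustrative placeholders, not a tested configuration):

# give each host's docker bridge its own pod subnet via the daemon options,
# e.g. in /etc/sysconfig/docker:
#   minion1 (192.168.122.11): OPTIONS="--bip=10.244.1.1/24"
#   minion2 (192.168.122.12): OPTIONS="--bip=10.244.2.1/24"

# then add a static route on each host to the other host's pod subnet
ip route add 10.244.2.0/24 via 192.168.122.12    # run on minion1
ip route add 10.244.1.0/24 via 192.168.122.11    # run on minion2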

Did Cockpit make it into this release?

Cockpit is not included by default in this release, but you can try a quick workaround by Red Hat's Stef Walter: Cockpit on RHEL Atomic Beta.

Is there a plan to include Cockpit in future releases?

After watching the Atomic/OpenShift talk at LCA2015 by Steven Pousty, it would appear that there is some overlap in the goals of Cockpit and OpenShift... is this the case?

We are looking at Cockpit to flesh out some manageability interfaces at this time.

With respect to OpenShift - there is no overlap. OpenShift v3 will provide a full workflow, development and production environment for a container PaaS, while Cockpit is looking at a hub/spoke management interface for the infrastructure elements.

RHEL Atomic Host documentation linked from the landing page
https://access.redhat.com/products/red-hat-enterprise-linux/atomic-host-beta

We have made documentation available from the landing page for all the deployment options (bare metal, VM, cloud).

Direct link:
https://access.redhat.com/articles/rhel-atomic-documentation

Getting Started with Red Hat Enterprise Linux Atomic Host
Red Hat Enterprise Linux Atomic Host -- Anaconda Installation Guide
Red Hat Enterprise Linux Atomic Host -- PXE Installation Guide
Red Hat Enterprise Linux Atomic Host -- Kickstart Installation
Red Hat Enterprise Linux Atomic Host -- Linux hypervisor Installation with qcow Media
Using Red Hat Enterprise Linux Atomic Host in Red Hat Enterprise Virtualization Environment
Using Red Hat Enterprise Linux Atomic Host on Red Hat Enterprise Linux OpenStack Platform
Using Red Hat Enterprise Linux Atomic Host with Google Compute Engine
Using Red Hat Enterprise Linux Atomic Host with Amazon Web Services
Using Red Hat Enterprise Linux Atomic Host in VMware
Frequently Asked Questions about cloud-init

Get Started with Docker Containers in RHEL 7 and RHEL Atomic
Get Started Orchestrating Docker Containers with Kubernetes

Please check it out and give us feedback on the documentation.

Regards,
Bhavna Sarathy
Senior Product Manager

RHEL Blog - Top 7 Reasons to Use Red Hat Enterprise Linux Atomic Host

http://rhelblog.redhat.com/2014/12/04/top-7-reasons-to-use-red-hat-enterprise-linux-atomic-host/

Bhavna,

Can you provide some documentation / resources for rpm-ostree ("4. Atomic Updating and Rollback")?

I understand the concepts from reading the upstream project pages (most of which now 404), but does Red Hat have documentation for pushing out a new OSTree and rolling back in Atomic?

-edit-

Found it (for anyone else looking):
https://access.redhat.com/articles/rhel-atomic-getting-started#upgrade
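
For reference, the basic flow seems to be roughly this (a sketch only; the linked article is the authoritative reference, and the exact commands and output may differ by release):

rpm-ostree status     # show the booted tree and any other deployments
rpm-ostree upgrade    # download and deploy the updated tree
systemctl reboot      # boot into the new tree
rpm-ostree rollback   # if needed, switch back to the previous tree (and reboot again)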

I notice that this process requires a full reboot of the host to boot into the new ostree.

As it stands, the ostree patching may be more streamlined, but for a container host, having to reboot to upgrade feels like a regression. In the 'old world' you would only need to reboot the host for kernel updates (and not even that if you have live kernel patching).

I appreciate that the 'old world' doesn't give you rollback, but I'm still interested in the impact of the 'new' approach. Can someone provide a real-world example of how this process will work if you're running hundreds of containers on the host? Is there an expectation that the running containers are cattle and should be destroyed/redeployed on a different host?

Thanks for your astute observations; a couple of points:

1) While ostree requires a reboot today, the architecture will allow it to evolve toward partial live updates (while still preserving the rollback ability).
2) Indeed, this model works best when the containers are cattle - you want to do rolling reboots where things like webserver pods are spread across hosts, and you only reboot a portion of them at a time. The Kubernetes scheduler should take care of ensuring you have enough replicas started (see the sketch below).
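
For illustration, a rolling reboot could look something like this (host names are placeholders, and this assumes your pods are managed by replication controllers so Kubernetes reschedules them onto the remaining hosts):

# upgrade and reboot one Atomic host at a time, waiting for each to come back
for host in atomic01 atomic02 atomic03; do
    ssh root@"$host" 'rpm-ostree upgrade && systemctl reboot'
    sleep 30
    until ssh root@"$host" true 2>/dev/null; do sleep 10; done
done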

Best regards,
Bhavna Sarathy

Bhavna,

Thanks for the feedback.

Do you know if there are plans to incorporate an ostree capability in mainline RHEL 7? I notice some subscription manager components relating to ostree made it into the RHEL 7.1 beta.

I see an immediate application for this technology in some 'cattle' VMs/images (rather than the host).

Sorry, looks like I missed this question. We first want to test out the ostree technology in RHEL Atomic Host and see if it becomes widely popular; it takes a little while for a new deployment model to gain traction. Since the packages are common between RHEL and RHEL Atomic Host, the updates we did for Atomic also went into RHEL, just through different code paths.

Can someone tell me where to download the installation media for Atomic Host Beta? I followed the link in the Getting Started Guide to the product page https://access.redhat.com/products/red-hat-enterprise-linux/atomic-host-beta. When I click the Download button, it brings me to the RHEL 7 download page. Nothing related to Atomic.

Do I need special permission to try the beta? Please help.

Hi Eric,

You need an active (RHEL) subscription in order to be able to download the Atomic Host beta (and you need to be logged in to the Customer Portal). If you don't have a subscription yet, you can try an evaluation.

If you do have a subscription, you can get the list of available images on the download page by selecting "Red Hat Enterprise Linux Atomic Host Beta" from the "Product Variant" drop-down menu.

Hi Eric - you may also find this link helpful: https://access.redhat.com/articles/1287233, which describes who has access to the RHEL Atomic downloads.

Greetings,

During the RHEL Atomic Host Beta program, we plan to provide 1-2 Atomic updates to our subscribed customers. We released the first Atomic update on Friday 12/12 with the docker-1.3.2-4 package and several other updates. High-level summary of changes:

Installed tree is now around 820MB, unnecessary packages stripped
git is removed due to the Perl dependency
Newer versions of Kubernetes and Docker
Rollup of asynchronous kernel errata

Please update your beta image with the updated Atomic tree. This allows us to test the infrastructure and process that we will be using post-GA to provide updates to our customers.
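
After applying the update and rebooting, a quick way to confirm what landed (just a sanity check, not an official procedure; package names are the ones mentioned above):

rpm-ostree status           # the new tree should show as the booted deployment
rpm -q docker kubernetes    # e.g. docker should now report the 1.3.2-4 build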

Best regards,
Bhavna Sarathy
Senior Product Manager
Red Hat Enterprise Linux

What sort of licensing is needed in order to be able to locally build images based on the official RHEL 7 base image? By "locally" I mean something like a Windows workstation running an Atomic VM.

My servers are registered (Satellite) and probably can do it naturally. I assume that "yum" commands in a Dockerfile will require some sort of subscription from the host in order to work, but I haven't tested it yet (been using CentOS until now). I also think that a Developer Subscription is a pricey overkill.

Created a new topic for this question, please ignore.

Hi, I would like to check whether RHEL Atomic can help with our solution. Our solution has an application that receives requests from clients on a dedicated UDP/TCP port, runs a check against its database (either PostgreSQL or a file-based database), and then sends a reply back to the client. Our question: can we use RHEL Atomic Host to run the application in separate containers so that they are independent of each other, with the application listening on a different port in each container?