IPAM and you.. tell me your stories
IP Address Management appears to be something that came and went from CIO buzzword bingo, but I'm not convinced the problem is completely solved (or even addressed) in many enterprises. So I'm keen to hear what people are doing, or what they have seen, regarding IPAM in the environments they manage.
So I thought I would put together some questions to spark the discussion, especially if you are using Red Hat as part of your IPAM solution. I'm also interested to hear generally what people are using for IP allocation, particularly on their Red Hat server fleet.
I am expecting answers such as (with more detail):
- We use DHCP and cross our fingers
- We use DHCP with manually configured reservations
- We ping IPs in the subnet until an address fails to respond (Bingo!)
- We email the network guys and they send back a spare IP
- We have an elaborate Excel spreadsheet/wiki page
- We use an IPAM product/appliance but allocation is a manual process
- We use an IPAM product/appliance, and allocation, DHCP, DNS etc. are all automated, with hosts registering through an API at build time
- We use a bespoke / in house application
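For the fully automated option above, the build-time registration step often boils down to a single API call from a kickstart %post script or provisioning hook. This is a rough sketch only; the endpoint path and field names are assumptions loosely modelled on a phpIPAM-style REST API, not any specific product:

```python
import json

# Hypothetical IPAM endpoint -- substitute your real product's API.
IPAM_URL = "https://ipam.example.com/api/myapp"

def build_registration(hostname, mac, subnet_id):
    """Build the request a newly built host would POST to claim the
    first free address in a subnet and register its name and MAC.
    (A real script would send this with urllib.request or curl.)"""
    return {
        "url": f"{IPAM_URL}/addresses/first_free/{subnet_id}/",
        "body": json.dumps({"hostname": hostname, "mac": mac}),
    }

req = build_registration("web01.example.com", "52:54:00:12:34:56", 7)
print(req["url"])
```

The point is that DNS, DHCP reservations and the IPAM database all get updated from one call, so the record can never drift from reality the way a spreadsheet does.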
Further to this...
Do you use DHCP for server IP allocation?
Do you use the same IPAM solution for both servers and desktops?
Do you only use your IPAM solution for 'cloud' or 'cattle' segments of your network?
Do you use, or can you recommend, an open-source product you have had success with? (e.g. http://phpipam.net/)
Can you recommend a commercial product? (is that allowed here?)
Lastly...
Are you happy with the solution you're using?
If you were to implement it (again).. how would you change your approach/solution?
WWRHD? Does Red Hat have a mature solution that I have missed completely?
Cheers!
Responses
We don't have a dedicated IPAM product in RHEL.
We do have IPAM components in RHEV and CloudForms, but looking at the documentation I see no easy (or sensible) way to integrate those components with a wider estate of installations.
In the past I have had success in small deployments with dnsmasq, which handles pretty much everything and can leverage the hosts file to manage permanent reservations, though it is a very manual process.
I've never tried scaling dnsmasq up to thousands of clients, so I'm not sure what the manageability would be like; I suspect it would be cumbersome, and performance at that level is a complete unknown to me. However, this thread from upstream suggests that it copes admirably.
Perhaps dnsmasq with /etc/hosts configured to manage permanent reservations could be considered a way to achieve IPAM with shipped components?
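As a concrete sketch of that idea, a minimal dnsmasq setup might look something like the fragment below. The addresses, MAC and hostname are illustrative placeholders; the options themselves (`dhcp-range`, `dhcp-host`, `expand-hosts`) are standard dnsmasq directives:

```
# /etc/dnsmasq.conf -- illustrative fragment
domain=example.com
dhcp-range=192.168.10.100,192.168.10.200,12h   # dynamic pool for transient hosts
dhcp-host=52:54:00:12:34:56,192.168.10.21      # permanent reservation by MAC
expand-hosts                                   # serve DNS for short names in /etc/hosts

# /etc/hosts then doubles as the "IPAM database":
# 192.168.10.21  web01
```

Because dnsmasq serves both DHCP and DNS from the same files, the hosts file becomes the single source of truth, which is the closest thing to IPAM you get from shipped components.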
After some discussions, I'm confident in saying RHEV and OpenStack both use Neutron to handle networking now.
Neutron, in turn, uses dnsmasq on the backend for dhcp services.
Unpicking the core networking from Neutron's objects approach would be fruitless for the use case under discussion.
The idea you presented above has provoked an awful lot of approving head nodding here in Farnborough.
I can't translate that into concrete action right now, but it seems like a popular idea and worth pursuing.
I'll bring it up with some of the devs in other teams and see if we can push it forward, but don't let that stop you pressing the go button and building something in house!
Hi Pixeldrift,
We do not have IPAM. We have several customers we support in different locations. One of the larger customers is a pure static setup, where our network team tracks everything (a la CDO, which is OCD in the proper order) and provides all the IPs we need. We are considering a lone DHCP server for cross-subnet builds, but we believe we have found a method, with Red Hat support, to build cross-subnet kickstarts, which we will test at some point.
Other customers we support have a mix of static and DHCP, and we use DHCP only temporarily for cross-subnet builds, then assign a static address. For these customers, each has a master static list that is maintained and backed up, a la CDO.
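For reference, the cross-subnet kickstart approach usually comes down to passing a static network line in the kickstart file so the installer doesn't depend on a DHCP server being reachable on the target subnet. The values below are placeholders, not anything from this thread:

```
# Anaconda kickstart fragment -- static addressing for a cross-subnet build
network --bootproto=static --device=eth0 --ip=10.20.30.40 \
        --netmask=255.255.255.0 --gateway=10.20.30.1 \
        --nameserver=10.20.30.2 --hostname=web01.example.com
```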
Happy with it? In our case, the network folks are typically responsive. Would we change anything if we did it again? We might have considered IPAM, and we may still consider it.
Part of it is "habit". When we were designing our VMware DR scenario (using SRM), they were going through all sorts of evolutions trying to solve the site-to-site IP change problem. I asked the stupid question: "why are you arsing about with IP-change scripts when you could just have static DHCP mappings at each end that take care of it for you?" It ...hadn't really occurred to them.
That said, DHCP for servers does have some pitfalls. If you go the static DHCP route, you can have hiccoughs where a server reboots, goes to re-grab its IP address from the DHCP server, and the DHCP server says "nope, that's already in use." Been there. It's a pain in the ass to deal with. If you go the generic DHCP route, then you have the joy of dealing with DNS registration. For Windows servers, that's generally a no-brainer. For UNIX servers, it has frequently proven to be problematic.
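For the static DHCP route, an ISC dhcpd `host` declaration is the usual mechanism; the hostname, MAC and address below are placeholders:

```
# /etc/dhcp/dhcpd.conf -- static mapping: this server always gets the same IP
host web01 {
    hardware ethernet 52:54:00:12:34:56;
    fixed-address 192.168.10.21;
    option host-name "web01";
}
```

Since `fixed-address` lies outside the dynamic range, the DR failover case works by simply having an equivalent declaration (with a site-local address) on the DHCP server at the other end.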
In either case, a lot of this is based on things that were broken 5 years ago but now work fairly reliably. Then again, there are still datacenters that try to disable network autonegotiation because it was so flaky 14 years ago.
Been involved in a couple of efforts to improve IP tracking and allocation. Usually, it starts with someone screaming "how can we possibly be out of IPs?", noting that "this spreadsheet method sucks" and/or "this spreadsheet's out of date: I just ping-scanned several VLANs and came back with X% of the tracked IPs no longer in use. We gotta fix this".
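That ping-scan audit is easy enough to script. This is a rough sketch only (the subnet, timeout, and `ping` flags assume a Linux box), and a live scan is no substitute for real IPAM, since hosts that are down or firewalled look "free":

```python
import ipaddress
import subprocess

def scan_subnet(cidr, dry_run=True):
    """Ping-sweep a subnet to find addresses that still answer.
    With dry_run=True it just returns the commands it would run,
    so you can inspect the sweep before unleashing it."""
    cmds = [["ping", "-c", "1", "-W", "1", str(host)]
            for host in ipaddress.ip_network(cidr).hosts()]
    if dry_run:
        return cmds
    # Keep only the addresses that answered a single ping.
    return [c[-1] for c in cmds
            if subprocess.run(c, capture_output=True).returncode == 0]

cmds = scan_subnet("192.168.10.0/29")
print(len(cmds))  # a /29 has 6 usable host addresses
```

Diffing the live list against the spreadsheet is exactly how the "X% no longer in use" complaint above tends to get quantified.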
Invariably, someone suggests looking at IPAM solutions. The CIO (or equivalent) gets on board and asks for pricing. Engineers find either pre-packaged solutions or make estimates on how much it would take to home-brew it. CIO (or equivalent) chokes and says, "I guess the spreadsheet isn't so bad".
