VMs and System Creation Date

Has anyone tackled the problem of making the initial build date of VMs built from templates representative of when the VM was instantiated rather than when the clone-source was installed? I was really hoping there was a way to massage the basesystem RPM's installation date as part of the newly-instantiated VM's first-boot process.

Responses

Tom,

Are you thinking along the lines of the features of /usr/sbin/aide (on the newly cloned system)? Maybe have /usr/sbin/aide run its initialization just before first boot (as a start script, and only once)? And then, on whatever schedule fits your use, run the aide check again?

Have you ever used the RPM profile feature in the Satellite server? It would not offer a date, but it would give you a rough idea of the inventory/versions of RPMs on the new system at the point the profile was run in the Satellite server.

This next bit will not fulfill your last sentence, but it could create a file showing when the system was cloned with VMware. Just before executing the clone, run a script on the master system that creates a start script at the default run level; when the cloned system goes through its first boot, that start script writes a date/time stamp into a file, perhaps /etc/instantiated. Of course RHEL 7 would be a bit different. Maybe the aide method I mentioned would give greater detail on the RPMs.
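A minimal sketch of that start-script idea, assuming the /etc/instantiated name suggested above. The path is /tmp here only so the sketch runs unprivileged; a real init script would write /etc/instantiated and then disable itself:

```shell
#!/bin/sh
# Sketch of a run-once "instantiated" stamp. On a real clone this would
# live as an init/start script and write /etc/instantiated; /tmp is used
# here only so the sketch can run without root.
STAMP=/tmp/instantiated

# Only stamp once -- subsequent boots leave the original date alone.
if [ ! -e "$STAMP" ]; then
    date '+%Y-%m-%d %H:%M:%S' > "$STAMP"
fi
cat "$STAMP"
```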

Not sure if I have missed your intention here... I suspect you're describing more...

Kind Regards

On physical systems, querying the installtime for the basesystem RPM was a fairly reliable method for determining install date/time (assuming that your system's time was set correctly during KickStart).

By comparison, many of the "history of the system" types of files are fairly unreliable for determining system build time (e.g., someone whacks *tmp files or the like). Then again, if the internal timekeeping info for RPMs were easily massaged, that would make them less reliable sources as well.

Basically, in our large scale environment run by a distributed group of administrators, it's good to be able to determine when a system was built. Knowing when can help identify who built it by giving a tighter search window. Knowing who is sometimes necessary so that you can address problems that have come up, directly, rather than having to send a broadcast "when building systems, make sure to do all the prescribed steps and in the prescribed order".

(edited) Thanks Tom, that clarifies what you're addressing. I'd be interested in what you are describing as well; our environment has a similar situation with administrators to the one you describe. But I'm fortunate: in most cases I know who has built what systems in my environment, and builds go through a process with some oversight.

What we deal with has to do with after the system is built. For those systems I've been using aide and check-in of configuration files. This is somewhat different from what you describe...

I leave a few touch-files in a few spots to indicate that
* kickstart occurred
* bootstrap was run

Also - I run

stat /root/anaconda-ks.cfg 

but.. I get a feeling I am over-simplifying the goal here ;-)

Yeah. I'm looking for options that are less "fragile". If I could manipulate/update the "%{installtime}" attribute for the basesystem RPM, that would likely be less "fragile" (mostly via obscurity and/or level of effort) than relying on touch files that may be obliterated or re-touched (and the specific files cited, unless my first-boot re-touched them, would have the template-creation dates rather than my first-boot date).

Tom,

What if you ran something (perhaps a perl script) to detect 'new' systems within your network, and then run a yum reinstall of one or more rpms on these newly discovered systems? Would you need something as detailed as the use of the 'aide' program?

Are these systems rejoined to a satellite server? Or are they showing up as a duplicate if previously joined and cloned?

No Satellite and the incumbent provisioning/patching/remote-scripting framework is being retired. Don't really have the need to signature everything, just something that will give a persistent "this system was built on date 'YYYMMMDD'" so that the change in frameworks (etc.) won't matter.

Tom, if I understand this, are the admins you speak of just cloning systems kinda at will and then you discover them later?

I do not know of a vmware method to add such a file upon a clone... (maybe the vmware folks could help with that, didn't see anything at vmware.com) The recent cloning we did on a server did not seem to allow creating such a file during the clone process.

My initial thought, which might help to a degree: have a perl script that checks for newly introduced systems and, when it finds one, creates or triggers the creation of an /etc/instantiated file with the statement you wanted. Perhaps it
- looks for /etc/instantiated containing the system's `hostname -s` and the statement you mentioned. If the hostname doesn't match, it either adds a line to /etc/instantiated or overwrites it, saving the old one as /etc/instantiated.old.timestamp (or takes whatever action you wish).

Our satellite servers enter the statement "RHN Satellite kickstart on $(date +'%Y-%m-%d')" into /etc/motd upon a kickstart (certainly not upon a VMware clone).

Yes to "cloning at will"; no to "discover them later". With our template creation process, we install a "run once" script that takes care of automagically adding the system to our CM/patch framework. Problem is, that framework is slated to be retired within 18 months and a lot of the history data will disappear with it. Thus, coming up with a consistent and persistent method for identifying hosts' build times is desired. Since we're a mixed VM/physical environment, the ideal would be a method that's universal to both types of deployments. Previously, when we were mostly physical, that method was querying the basesystem RPM (i.e., rpm -q --qf '%{installtime:date}\n' basesystem). This is a method that's reliable on physicals but not so much on virtuals.
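For what it's worth, the raw %{installtime} tag is epoch seconds, so turning the basesystem query into the "built on date" string is a one-liner. A sketch (the epoch value here is a placeholder, since the sketch has to run somewhere without an RPM database):

```shell
#!/bin/sh
# Convert basesystem's %{installtime} (epoch seconds) into a YYYYMMDD
# "built on" string. On a real system you'd feed it the output of:
#   rpm -q --qf '%{installtime}\n' basesystem
epoch="${1:-1308000000}"   # placeholder epoch so the sketch runs anywhere

build_date=$(date -u -d "@$epoch" '+%Y%m%d')   # GNU date syntax
echo "this system was built on date '$build_date'"
```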

Conceivably, before the framework is retired, it could be used to run an enterprise-wide job to ensure that each managed system's basesystem RPM %{installtime} value lines up with the host's initial injection into the framework (part of the KickStart for physicals; part of the run-once for VMs cloned from templates).

While touch files, news files, motd files and the like are doable, they're "fragile". That is, they're easily clobbered. Something like "massaging" the RPM database to reset the install dates of the RPMs present at first boot would be ideal, as it would be notionally more difficult for such data to get accidentally clobbered. Would likely be a hedge against intentional clobbering, since the method for massaging the RPM database entries is apparently relatively obscure (e.g., no one on this thread has pointed out a "use at your own risk"/"you really oughtn't do this" method of "here's how to edit your RPM database entries' values").

Tom

I think I remember you receiving a rare award at the Red Hat Summit, and you've always seemed highly competent, so I suspected you already knew the dangers of what you propose; you even corrected me on something technical I missed in a response to a person here in this discussion area in a different post (thanks, by the way). It is extremely improbable I'd ever edit the rpm database entries directly, but I'm interested in knowing about it. I would suspect there's got to be another method to leave an obscure and unique fingerprint.

I brought up the idea of the file because in a previous post in this discussion you mentioned... (and I misinterpreted, now I know the difference)

"Don't really have the need to signature everything, just something that will give a persistent "this system was built on date 'YYYMMMDD'""

I mistakenly took that to mean an actual file that would show the date. Now I know you mean something different.
For a file I don't want changed, something less fragile (but still fragile) would be to 'chattr +i /path/to/some/file', but again, fragile.
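A sketch of the chattr approach, for anyone following along. It needs root (CAP_LINUX_IMMUTABLE) and an ext*/XFS filesystem for the immutable flag to actually stick; errors are swallowed here so the sketch also runs unprivileged, and a stand-in file in /tmp is used in place of /etc/instantiated:

```shell
#!/bin/sh
# Immutable-flag sketch: once +i is set, even root gets
# "Operation not permitted" editing the file until the flag is cleared.
f=/tmp/instantiated.demo            # stand-in for /etc/instantiated
date '+%Y-%m-%d' > "$f"

chattr +i "$f" 2>/dev/null || echo "chattr +i needs root and fs support"
lsattr "$f" 2>/dev/null || true     # an 'i' in the flags means it stuck

# The fragile part: anyone who knows the flag can clear it just as easily.
chattr -i "$f" 2>/dev/null || true
rm -f "$f"
```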

One question (because I do not know and I believe you do): what (and how) would you edit in the rpm database that would not get altered by subsequent updates, and also not cause consternation to the rpm database?

Good luck (but it seems you will make your own good luck)

Heh, yeah. I had to do that to our NetBackup servers to force the NetBackup operators to correct DNS rather than working around DNS issues with /etc/hosts. Problem with chattr is that it's (well, was: once this thread gets into Google, though...) a lot easier to find out how to "fix" it when root gets a "permission denied" on changing a file. One of our more clued admins (since departed) figured out my chattr, so I had to get a bit trickier with it (tangent: really wish we'd just had Puppet or an equivalent to govern such config-drifts).

The "basesystem" RPM normally only ever gets touched when a system is first kickstarted. So, the info associated with that file is fairly persistent and non-fragile. Had I needed to resort to a direct edit, I'd probably have to tear apart one of the Berkeley DBM files in /var/lib/rpm to do it. But, that's obviously fraught with error potential. Hosing the __db.NNN files is one thing; hosing the files from which those are generated could be more problematic.

Actually, just figured it out:

# yumdownloader basesystem
# rpm -q --qf '%{installtime:date}\n' basesystem
Tue 12 Jul 2011 11:24:06 AM EDT
# rpm -i --force --justdb basesystem-10.0-4.el6.noarch.rpm
# rpm -q --qf '%{installtime:date}\n' basesystem
Wed 11 Jun 2014 09:21:13 PM EDT

Easier than expected.
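Wrapped up as a first-boot step, the trick above might look like the sketch below. The `restamp` function name and the DRY_RUN guard are inventions for illustration (the guard exists only so the sketch can be exercised anywhere); the real path assumes yumdownloader (from yum-utils) and reachable repos:

```shell
#!/bin/sh
# Re-stamp an RPM's %{installtime} to "now" using --justdb, per the
# thread above. DRY_RUN=1 (the default here) just prints the commands.
restamp() {
    pkg="$1"
    if [ "${DRY_RUN:-1}" = "1" ]; then
        echo "would run: yumdownloader --destdir=/tmp $pkg"
        echo "would run: rpm -i --force --justdb /tmp/$pkg-*.rpm"
        return 0
    fi
    yumdownloader --destdir=/tmp "$pkg"
    # --justdb rewrites only the RPM database entry (fresh installtime),
    # not the filesystem; --force allows "reinstalling" the same version.
    rpm -i --force --justdb /tmp/"$pkg"-*.rpm
}

restamp basesystem
```

On a cloned VM this could be called with DRY_RUN=0 from the same run-once hook that joins the system to the CM framework.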

Excellent work man!

Except that I should have snapshotted that VM so I could roll-back. Now I need to decide if it's worth the effort to "nudge" it back to its original date. :p

Nice work Tom.

Another question, I would suspect not, but does 'basesystem' ever get updated later? Is there any chance of an unanticipated change for that rpm that would clobber that change?

Unless you force it (as shown above), no. Or at least, I've never known it to change.

$ rpm -ql basesystem
(contains no files)

Pretty much, its sole purpose is to be informational.

Nice choice of rpm,
I just found this link showing only 12 (total) instances of the basesystem rpm ranging from RHEL 3 to RHEL 6.

Surprised there's that many. I guess it's a side effect of how their code-control system is organized. If you notice, most of those twelve are repeats of each other (at least within the same $releasever).

Right

Tom,

Thanks for this info, used it today on a cloned system.

Using Tom's bit from above again today...

# yumdownloader basesystem
# rpm -q --qf '%{installtime:date}\n' basesystem
Tue 12 Jul 2011 11:24:06 AM EDT  # a date in long ago past
# rpm -i --force --justdb basesystem-10.0-4.el6.noarch.rpm
# rpm -q --qf '%{installtime:date}\n' basesystem
Wed 11 Jun 2014 09:21:13 PM EDT  # something much more current

Thanks Tom

I have personally used the date on the /etc/ssh/ssh_host_key file which is created on first boot for this purpose, but I appreciate what Tom is saying about files on the filesystem potentially being 'touched' and their date changing. Think I might adopt this rpmdb approach too.

Thanks for bumping this thread R. Hinton, I missed it first time round!

Pretty much anything can get dicked with - even this "solution". I just tend to prefer, if I'm going to semi-rely on something (as an audit-point), that it's a something that requires at least a modicum of research for someone to figure out how to make that audit method unreliable. =)

Tom, I've been doing this (your great idea above) with our VMware clones, when we do clone some VMware system (not very often). I additionally make new ssh host keys. There's a few other things I have, I can post the doc I have on this at a later date when I get back at a different customer site.

Thanks Tom

-RJ
