'out of his depth' question: What are good ways to have structured, controlled patching of servers across environments?
We have about five dozen RHEL and CentOS servers at various release levels. I am part of a small team of 2.2 members. My team and its predecessors never addressed how to roll out patches, so every security audit becomes a panic patch session. I have been asked to fix the root of the problem and patch proactively on a monthly schedule, similar to what some other OSes do. I have been instructed to apply patches to a proof-of-concept group of servers (which I will have to build) for testing, then roll the patches out to Dev for more testing, then to the other environments. I am new to Linux, so I am very unsure of the correct direction, the one that will not lead to administrative h***.
My current working idea is this: host a yum repo for each environment we need to separate. Puppet can then be set to install the "latest" version of everything, while I control what "latest" means through the repo. I can populate the repos by having a purpose-built server download the necessary updates and dependencies and copy them to the POC repo. Those packages can then be promoted to the other yum repos as tests pass and other departments give the green light. But boy, this seems like a lot of work.
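To make the "promotion" step concrete, here is a minimal sketch of what moving an approved package set between per-environment repo trees could look like. Everything here is an assumption for illustration: the `/srv/repos` layout, the environment names, and the `promote` helper are all invented. On a real repo server you would first mirror upstream into the POC tree (e.g. with `reposync`) and re-run `createrepo --update` on the target after every promotion; those steps are omitted because they need the real repo tooling.

```shell
#!/bin/sh
# Hypothetical layout: one directory of RPMs per environment under $REPO_ROOT.
REPO_ROOT="${REPO_ROOT:-/srv/repos}"

promote() {
    src="$REPO_ROOT/$1"
    dst="$REPO_ROOT/$2"
    # Replace the target wholesale so it becomes an exact copy of the
    # approved set: a package pulled from POC disappears downstream too.
    rm -rf "$dst" && cp -R "$src" "$dst"
}

# Example: after POC sign-off, push the same RPMs to the dev repo.
# promote poc dev
```

The point of copying whole trees rather than individual RPMs is that each environment's repo is always a known, tested snapshot, which is exactly what lets `ensure => latest` stay safe.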
The backup idea is to specify the patch level of each package for each environment in puppet. This also seems like its own special level of h***. Maybe my puppet-fu is what needs fixing?
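For comparison, here is a hypothetical manifest fragment showing the difference between the two ideas (the package name and version string are invented for illustration):

```puppet
# Repo-per-environment idea: "latest" is bounded by whatever the
# environment's repo currently holds, so the manifest stays simple.
package { 'openssl':
  ensure => latest,
}

# Backup idea: an exact epoch-version-release, maintained by hand
# for every package in every environment, e.g.:
#   ensure => '1.0.2k-19.el7',
```

The second style means editing manifests (or hiera data) for every package on every patch cycle, which is why controlling versions at the repo instead tends to scale better.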
Yes, I have talked to my team about this. They are surprisingly non-committal. Am I reinventing the wheel? Is there an elegant way to do this that my noobness is not aware of yet? I don't want to be just a package patcher all day, every day...
(P.S.: How is there a "chef" tag and an "ansible" tag, but no "puppet" tag, for these questions?)