Archive for the 'HowTo' Category

13 Sep 17

Kubuntu 17.04 on Dell e5470 for extreme battery life

So apparently it’s been 5 years since I last updated this series: https://www.totalnetsolutions.net/2012/12/09/lenovo-t430-running-kubuntu-12-10-for-extreme-battery-life/

I’ve rebuilt most of these configurations over the past 5 years, especially as I’ve switched away from WWAN to tethering, and from spinning rust to an SSD, but a lot of the core concepts remain. About 6 years ago my battery died after 3.5 hours of VM troubleshooting while on a flight, and I lost some data in the emergency “go to sleep, not hibernate”, which cost me 2 hours of rework in the hotel at midnight. My goal now is: “be able to work multiple simultaneous tasks, have a VM running, and still get super-long battery life when I need it, without noticeably impacting performance.”

With powertop reporting <5W (14h remaining) power consumption while idle, and <6W (11 hours) with firefox open while I start to write this post in August, 2017, I think I’ve hit the mark reasonably well.

As with the Lenovo T430 in the previous post, everything I care about works right out of the box, but when I first started my custom kernels, I missed a few things that I had to add back in before writing this up.

Hardware


CPU: Intel(R) Core(TM) i5-6300U CPU @ 2.40GHz
Memory: 16GB RAM
VGA compatible controller: Intel Corporation HD Graphics 520 (rev 07)
Ethernet controller: Intel Corporation Ethernet Connection I219-LM (rev 21)
Network controller: Intel Corporation Wireless 8260 (rev 3a)

Audio device: Intel Corporation Sunrise Point-LP HD Audio (rev 21)
Bluetooth device: Intel (integrated on the USB)
Synaptics Touchpad and Twiddler Mouse

Jump to main sections with these links:
CPU Configuration
Network Configuration
Video Configuration
Encryption / Security configuration
Battery saving configuration
Custom kernel .config

CPU and Battery


I have a series of posts on my love of getting the most performance and battery life I can from my systems; see the last and first of the series for a bit more, or dig through my Twitter on the subject. What’s new this year: with the latest Core i5 CPU and Linux 4.4, the intel_pstate driver’s governors are no longer actually buggy, if you configure them right. I used to use the acpi_cpufreq governors “ondemand” on AC and “conservative” on battery. But the new pstate driver apparently performs better (thanks, Phoronix) AND scales down for battery savings better, so I needed to switch. Since I was switching governors, I figured it was time to re-check my 2007 finding that moving from “GENERIC_CPU” to “MCORE2” alone saved me 30+ minutes.

Well, it still does. But I didn’t keep the data, sorry. What this means is I once again needed to compile a custom kernel to get the right CPU options and the new pstate driver. Since the 2012 post I’ve also moved away from the default kernel scheduler to Con Kolivas’ BFS scheduler, so we get to patch THAT in as well. More on those options down at the custom kernel config section, but the point here is that BFS added some stability to the heavy “running multiple VMs, and processing 4GB of raw data in Perl” swapping problems I was having, even with 16GB RAM, while not hurting my battery life, and quite possibly adding 10-20 minutes of extra life in normal operations.

The battery in the system is designed at 8157000 mWh, and after 6 months is down to 6621000 mWh. The “amount of time running” is based on the past 2 months, not day 1 of receiving the laptop.

Lastly, I’m still using cpufreqd, but the configuration is vastly simplified – pstate “powersave” when my AC is not plugged in, or the battery’s below 70%, and pstate “performance” otherwise. Ubuntu fixed the broken cpufreqd daemon sometime in 2014, so I’m back to the distribution default version of that, yay!
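The linked config isn’t reproduced inline, but the rules described above boil down to something like this sketch of a cpufreqd.conf (profile names and the exact battery threshold are illustrative, not copied from my real file):

```
[General]
pidfile=/var/run/cpufreqd.pid
poll_interval=2
verbosity=4
[/General]

[Profile]
name=fast
minfreq=0%
maxfreq=100%
policy=performance
[/Profile]

[Profile]
name=frugal
minfreq=0%
maxfreq=100%
policy=powersave
[/Profile]

[Rule]
name=ac_and_charged
ac=on
battery_interval=70-100
profile=fast
[/Rule]

[Rule]
name=everything_else
ac=off
profile=frugal
[/Rule]
```

With intel_pstate, the “policy” lines select the pstate governors directly, which is the whole trick.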

My custom cpufreqd.conf.

Network


I stopped using my jumbo frames script from here in Kubuntu 16.10, because apparently NetworkManager can figure that out on its own, and it’s been relatively successful. My wireless adapter connects to the new Netgear R6400 at near-gigabit speed, but the R6400 doesn’t support jumbo frames itself, so I’m segmenting off some new VLANs to separate the jumbo frames hosts from the wireless / nonintelligent hosts. That’ll mean resurrecting my jumbo frames script to instead set the VLAN tag when I’m home.
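The resurrected script would most naturally hang off a NetworkManager dispatcher hook. A rough sketch, where the SSID “HomeNet”, VLAN 10, and the file path are all made-up examples rather than my actual values:

```shell
#!/bin/sh
# /etc/NetworkManager/dispatcher.d/50-home-vlan (sketch)
# NM calls dispatcher scripts as: <script> <interface> <action>

vlan_for_ssid() {
    # Map SSIDs to the VLAN that carries the jumbo-frames hosts at home.
    case "$1" in
        HomeNet) echo 10 ;;   # hypothetical jumbo-frames VLAN
        *)       echo 0  ;;   # 0 = no tagging anywhere else
    esac
}

if [ "$2" = "up" ]; then
    ssid=$(iwgetid -r 2>/dev/null)
    vlan=$(vlan_for_ssid "$ssid")
    if [ "$vlan" -ne 0 ]; then
        # Create and bring up the tagged sub-interface, e.g. wlan0.10
        ip link add link "$1" name "$1.$vlan" type vlan id "$vlan"
        ip link set "$1.$vlan" up
    fi
fi
```

The `ip link … type vlan` form needs the 8021q module loaded; everything else is stock iproute2.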

Sound


Sound has always been a joke for Linux users, but the Intel HD Audio has been really solid for me for several years, especially with PulseAudio actually being relatively stable for me. When I received the laptop, I was having a problem where full-duplex audio was causing what appeared to be a storm of interrupts that hung the entire laptop. But about 2 months of debugging resulted in “I built a new kernel, and now it works fine.” I don’t know if it was a bug in the codec in the kernel, or something that got silently patched. Now I have a Bluetooth audio headset, a Bluetooth headset for online conference calls, and appropriate switching for apps and reminders (reminders / alerts go to speakers and Bluetooth in my config, in case I take the headphones off), all with the options in the PulseAudio configuration in KDE. I have no “.asoundrc” or /etc/asound or /etc/pulse or ~/.pulse/client.conf anymore either, which is great!

Video


For the first time in years, I do not have a multi-graphics card system to deal with. The i915 driver works out of the box and is unremarkable, but functional. And great for battery life. But boring to discuss.

Encryption and Security

Encryption

During installation, I chose the option to use an encrypted LVM volume. This uses dm-crypt to encrypt the full disk, so that it has to be unlocked at boot time. The Kubuntu installer seems to forget this fact, so it also asks you to set up ecryptfs private home directories, which is NOT necessary for a single-user laptop, since the whole OS is already encrypted. The only oddity with dm-crypt is that sometimes the splash-screen prompt to unlock the computer doesn’t show. Originally I’d just wait for the disk activity to stop and, staring at a blank screen, type the passphrase blind; it would still unlock successfully. But I instead made a change to /etc/default/grub:

GRUB_CMDLINE_LINUX_DEFAULT="noquiet nosplash"

And now I don’t get the splash screen, and the prompt comes up properly right away. And I get all the hacker-y looking boot errors from systemd.
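One step glossed over above: edits to /etc/default/grub only take effect after the grub config is regenerated. The sketch below demonstrates the edit on a throwaway copy of the file (the real edit needs root, and the real follow-up is `sudo update-grub`):

```shell
# Flip GRUB_CMDLINE_LINUX_DEFAULT in a demo copy of /etc/default/grub.
grub_file="./grub.default.demo"   # stand-in for /etc/default/grub
printf 'GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"\n' > "$grub_file"

sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT=.*/GRUB_CMDLINE_LINUX_DEFAULT="noquiet nosplash"/' "$grub_file"

cat "$grub_file"
# sudo update-grub   # required on the real system before the change applies
```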

Security

Because this laptop has some sensitive work information on it, I wanted to get a bit more paranoid with the “unattended on a conference table” and “connected to a public wifi network” situations, especially since I actually have OpenSSH listening on all interfaces (yes, I ssh into my laptop from my phone more often than you do). I purchased a multi-protocol Yubikey and downloaded and installed the Yubikey PAM module for Challenge-Response, with the instructions on their website, here. Combined with Active Directory authentication, my cached user can only log in if the Yubikey is inserted into the laptop. So when I step away in meetings, the laptop locks, and my password can’t be cracked.
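For reference, the challenge-response hookup in PAM comes down to one line. This is a sketch based on Yubico’s documentation; the exact file (common-auth here) and where the line sits in the stack depend on your distribution and whether you want the key required or merely sufficient:

```
# /etc/pam.d/common-auth (fragment)
auth    required    pam_yubico.so mode=challenge-response
```

With “required”, authentication fails outright when the Yubikey isn’t inserted, which is the behavior described above.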

For additional security, I enrolled the root account on my laptop into my BeyondTrust Password Safe (since I’m working for them right now). This product rotates the root password every 14 days with a 50+-character random passcode, so even an attacker getting physical access once it’s booted (decrypted) will have little chance of breaking into the box even when I have the yubikey in place.

Additional Power Savings


I still run laptop-mode tools to cut down on power utilization from non-CPU peripherals. I could get more by having the ethernet port actually turn off when on battery, but I actually use it on battery quite a lot, so I’m not sure the hassle of re-enabling it is worth the battery savings. Here are the configurations I use:
intel-hda-powersave
intel-sata-powermgmt
intel_pstate
laptop-mode
runtime-pm
wireless-iwl-power
cpuhotplug
bluetooth
battery-level-polling
ethernet
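Each of those names corresponds to a module configured by a file under /etc/laptop-mode/conf.d/, and most modules gate themselves on a CONTROL_* or ENABLE_* variable. A quick sketch for listing which modules are switched on (the directory is parameterized so it can be tried against a copy; the grep pattern is a loose heuristic, not an official interface):

```shell
# List laptop-mode-tools modules whose conf.d file enables them.
conf_dir="${conf_dir:-/etc/laptop-mode/conf.d}"

list_enabled_modules() {
    for f in "$1"/*.conf; do
        [ -e "$f" ] || continue
        # Match e.g. CONTROL_ETHERNET=1 or ENABLE_LAPTOP_MODE_TOOLS="auto"
        if grep -Eq '^(CONTROL|ENABLE)_[A-Z_]+=("?1"?|"?auto"?)' "$f"; then
            basename "$f" .conf
        fi
    done
}

list_enabled_modules "$conf_dir"
```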

What I’m now getting is 5-7W of power utilization while online with firefox and chrome both running, bluetooth running, and no VMs. Booting my Windows VM in VMware Workstation bumps me up to 15-20W, but I’m still getting 5 hours of battery life with no features disabled AND running a full Windows VM (the Windows VM has battery detection disabled, too). My non-VM battery life is reporting in the 9-11 hour range, but I’ve never had to use it that long to worry.

Kernel Config


I use the Ubuntu Kernel sources, mostly because the laptop tells me when there’s new Kernel sources with security fixes. I’m using BFS as my scheduler, which is fantastic when I get into “3 VMs using 12GB RAM and a reporting job wanting another 6GB” swap death. I have enough keyboard control to kill the reporting job, then shut down the VMs, and try the reporting job again. Before BFS, I either waited 6 hours, or rebooted the whole damn laptop.
BFS patches are here. If that doesn’t make sense, don’t use them. Please.
My custom kernel .config is here.

Building

The build system I use is the same as in 2012:

sudo apt-get install fakeroot linux-source
sudo apt-get build-dep linux-image-`uname -r`
sudo usermod -a -G src YOUR_USERNAME

Now log out and back in, so that you’re a member of the “src” group.

cd /usr/src
sudo chown -R $USER:src .
tar -jxf ./linux-source-4.4.0/linux-source-4.4.0.tar.bz2
ln -s linux-source-4.4.0 linux
cd linux
wget http://www.totalnetsolutions.net/wp-content/uploads/2012/12/rob-config-20121204c.txt
mv rob-config-20121204c.txt .config
make oldconfig
make menuconfig

Make any changes you want in here, then exit and save

make-kpkg --initrd --rootcmd fakeroot --append-to-version=.20170912a kernel_image kernel_headers

You’ll get 2 DEB files in /usr/src that you can then install and boot to. The “append-to-version” value is the dating system I use for my kernels: “20170912a” means the 2nd kernel attempt on September 12, 2017, the day I’m writing this post (first attempts get no letter).

24 Aug 17

Invalid ISTG in RODC-only Site

Our primary lab here at totalnetsolutions.net is a bit space-constrained (RAM constantly 50% overcommitted in ESX), so we have it limited to only 2 domains, a parent and a child. There are 2 sites; one of them is a “BranchOffice” with only an RODC. This serves us well for the majority of customer cases, especially since the DHCP pool serves the BranchOffice site, which is routed via DSL to the CorpOffice site.

Recently we needed a 3rd domain (2nd child) for testing a particular user/group creation scheme for a customer. Upon completing testing in August, to reduce RAM utilization on the ESX host (and because it makes the AD schema even more “real-world”), we dcpromo’ed the DC out of the 3rd domain, deleting the entire domain. We deleted the objects and ran the ntdsutil metadata cleanup, like good AD admins. Then we watched the logs fill up with Information-only events for a few weeks, and forgot about the episode.

The thing was: the DC in the 3rd domain, because it was short-lived, was left with its DHCP address (it was never a DNS server), so it got added to the BranchOffice site. According to this 2009 Technet AD Troubleshooting Blog entry from Ingolfur Strangeland, RODCs won’t register as the Intersite Topology Generator (ISTG) for the site, but they will perform that role for themselves. (Background on the ISTG.) Because of this, the *new* DC, which was writeable, took over the role of the ISTG for the site, and made sure that all the replication connections went through it, as the only writeable DC in the site.

The problem is, we removed this server. When we removed the writeable DC in the BranchOffice site, there was no other server available to write “I’m now the ISTG” and replicate that setting outbound to the other site(s)… because the only remaining server was an RODC.

We discovered the problem 2 months later (Rob had a baby and wasn’t paying any attention to the lab for a while) when, after creating 1,000,000 new users in the lab, replication was surprisingly slow, but only into the Branch office. So slow that when we joined new computers, they’d boot up with “Service Principal Unknown” errors, and AD users couldn’t log in. In Active Directory Sites and Services we saw that the ISTG Server and Site were both “Invalid”.  This post from 2011 discusses how to move this, but not if it works for RODCs… the good news is that it does:

  1. Open the Configuration container, e.g. with adsiedit.msc
  2. Expand CN=Sites, and then the site with the broken ISTG (likely the site with the RODC)
  3. Double-click to open the properties of CN=NTDS Site Settings
  4. Find the value: “InterSiteTopologyGenerator” and paste in the full DN (from the Configuration Container, not the RootDN) of the RODC
    1. This is the “distinguishedName” value of the CN=NTDS Settings,CN=<servername>,CN=Servers,CN=<sitename>,CN=Sites,CN=Configuration object of the server in the site in question that *should* be the ISTG.
  5. Click “OK” to Save, use ‘repadmin’ or dssite.msc (AD Sites and Services) to force replication and wait 15 minutes (or your own inter-site replication time)
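The steps above can also be scripted. A hedged sketch as an LDIF modify that ldifde or ldapmodify could import; the site name “BranchOffice”, server name “RODC01”, and forest DN are placeholders for whatever your environment actually uses:

```
dn: CN=NTDS Site Settings,CN=BranchOffice,CN=Sites,CN=Configuration,DC=example,DC=com
changetype: modify
replace: interSiteTopologyGenerator
interSiteTopologyGenerator: CN=NTDS Settings,CN=RODC01,CN=Servers,CN=BranchOffice,CN=Sites,CN=Configuration,DC=example,DC=com
-
```

Note the value is the DN of the server’s NTDS Settings object, matching step 4 above.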

 

24 Aug 17

Signing RPM packages for Multiple RHEL versions

Signing RPMs is supposed to be easy:

gpg --list-keys
cat > ~/.rpmmacros <<'EOL'
%_topdir /home/rob/rpmbuild
%debug_package %{nil}
%_signature gpg
%_gpg_name Rob A <me@totalnetsolutions.net>
%__gpg_sign_cmd %{__gpg} \
gpg --digest-algo=sha1 --batch --no-verbose --no-armor \
--force-v3-sigs --passphrase-fd 3 --no-secmem-warning -u "%{_gpg_name}" \
-sbo %{__signature_filename} %{__plaintext_filename}
EOL
rpm --resign your-package-1.0-1.noarch.rpm

But there are a lot of caveats:

  1. If you run this on RHEL or CentOS or Scientific Linux 6.x and have an RSA, rather than a DSA, GPG key, any older systems (5.x or 4.x) won’t be able to properly decode the signatures. If you have a DSA key (and only v6+), the --force-v3-sigs is enough.
  2. If you run this on RHEL or CentOS 5.x, you need to remove the “--force-v3-sigs” option from the .rpmmacros line
  3. http://technosorcery.net/blog/2010/10/10/pitfalls-with-rpm-and-gpg/ is a bit wrong – RSA keys don’t always work, DSA do (there’s apparently a RHN KB article about this, if you have support and licenses to read it. I don’t right now).
  4. RHEL / CentOS / Scientific Linux 7 won’t accept (without warning) RPMs signed with weak keys (or weak digests, in some cases, like sha1)

So, based on the systems you’re trying to build your package for, ensure the signing key you’re using and the digest algorithm you’re using, are supported across all the versions you expect to support… or build multiple RPMs, and put them in separate repos for each OS version.
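To check what a package was actually signed with before shipping it, `rpm -qpi <package>` prints a Signature line naming the key type and digest. A small helper to classify that line; the parsing is deliberately loose, since the exact format varies a little between rpm versions:

```shell
# Classify an rpm "Signature" header line as RSA or DSA.
# The line looks roughly like:
#   Signature : RSA/SHA1, Tue 22 Aug 2017 ..., Key ID deadbeefdeadbeef
sig_key_type() {
    case "$1" in
        *RSA*) echo RSA ;;
        *DSA*) echo DSA ;;
        *)     echo unknown ;;
    esac
}

# Real usage would be something like:
#   sig_key_type "$(rpm -qpi your-package.rpm | grep '^Signature')"
```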

04 Oct 13

Updating CentOS 4.9

I recently booted up a long-powered-off test system for a customer, and realized it was still CentOS 4.7. up2date gives one of these error messages:


Error: Bad/Outdated Mirrorlist and Baseurl

[Errno -1] Header is not complete.
Trying other mirror.

To fix, first run this to update to a newer version of yum:

for i in \
libxml2-2.6.16-12.6.i386.rpm \
libxml2-python-2.6.16-12.6.i386.rpm \
readline-4.3-13.i386.rpm \
python-2.3.4-14.7.el4.i386.rpm \
python-elementtree-1.2.6-5.el4.centos.i386.rpm \
sqlite-3.3.6-2.i386.rpm \
python-sqlite-1.1.7-1.2.1.i386.rpm \
elfutils-0.97.1-5.i386.rpm \
popt-1.9.1-32_nonptl.i386.rpm \
rpm-libs-4.3.3-32_nonptl.i386.rpm \
rpm-4.3.3-32_nonptl.i386.rpm \
rpm-python-4.3.3-32_nonptl.i386.rpm \
python-urlgrabber-2.9.8-2.noarch.rpm \
yum-metadata-parser-1.0-8.el4.centos.i386.rpm \
yum-2.4.3-4.el4.centos.noarch.rpm
do
rpm -Uvh http://archive.kernel.org/centos/4.9/os/i386/CentOS/RPMS/${i};
done

(note: the order is important for the dependencies, and the trailing “\” line continuations keep the list readable without bash spitting out interpretation errors.)

Then edit /etc/yum.repos.d/CentOS-Base.repo: comment out mirrorlist directives, and in each enabled section add baseurl=http://archive.kernel.org/centos/4.9/os/$basearch, baseurl=http://archive.kernel.org/centos/4.9/updates/$basearch, etc.
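After that edit, the [base] stanza of CentOS-Base.repo ends up looking roughly like this ([updates] is the same with “updates” in the path; the gpgkey line is whatever your existing file already has):

```
[base]
name=CentOS-$releasever - Base
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
baseurl=http://archive.kernel.org/centos/4.9/os/$basearch/
gpgcheck=1
```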

Note the change from vault.centos.org to archive.kernel.org/centos – the CentOS Vault is handling RPM headers differently than yum in CentOS 4.9 expects. The Kernel.org archive does not.

02 Oct 13

Setting up a corporate yum repository mirror for bandwidth and staged update management

Part 2 of the series Yum Repositories for Corporate Deployments.

Part 1 of this series is: Yum Repositories: Background, Terminology and basics.

The most basic setup of a corporate yum repository would be to build a “base” repository for building centralized installs of systems off of the netinstall CD/USB stick provided by your Linux distro. The next basic repository to set up would be an “updates” repository, synced back to the distro, for bandwidth management for scheduled updates. We’ll walk through these 2 setups in order.

Setting up the repository host server

The first thing to determine is *how* you’re going to host your repository.  HTTP is probably the most common, but FTP and NFS repositories can be built as well.  The additional work for an NFS repository is minimal, so we’ll cover that as well as HTTP, using apache 2.2 as the HTTP server.

First, install the NFS and HTTP server components as well as the required reposync utility (this information is exactly the same for an NFS or HTTP apt repository for Ubuntu – a post for another time) and set them to autostart:
sudo /usr/bin/yum install nfs-utils httpd yum-utils
sudo /sbin/chkconfig httpd on
sudo /sbin/chkconfig nfs on

Next set up the apache server to share the repository. This requires deciding *where* to host the repository, so we’ll use /srv/repo as our base. Create a file /etc/httpd/conf.d/repo.conf with these contents:

Alias /centos /srv/repo

<Directory /srv/repo>
Options +Indexes
AllowOverride None
Order allow,deny
Allow from all
</Directory>

And now share out that directory via NFS and restart the servers:

echo '/srv/repo *(ro,async)' >> /etc/exports
service nfs restart
service httpd restart

The last suggestion I’d make is to create a subdirectory for each OS major version and its architectures in the root of your repository hierarchy, which is simple with bash expansion:

mkdir -p /srv/repo/{5,6}

Setting up a “base” repository

This is as simple as copying the contents of the install DVDs or CDs into the repository. We’ll use the CentOS materials for these examples, but Red Hat Enterprise Linux and Scientific Linux work the same.
First, we need to create our repository structure. This walkthrough will only demo RHEL 6 derivatives, but 5.x will work the same. This is the first time to think about which repositories to build. I’ll create 3 total: “base”, “updates” and “internal”. “base” and “updates” will be the distribution-provided repositories, and “internal” will be for our company’s internal software. The “base” repository actually lives in a folder called “os”, and each repository has subfolders for each architecture that it supports. Therefore, we need to create 3 directories in /srv/repo/6: “os”, “updates”, and “internal”, and each of those gets 2 subdirectories for our supported architectures: “i386” and “x86_64”. (Also simple with bash expansion.)

mkdir -p /srv/repo/6/{os,updates,internal}/{i386,x86_64}

Now copy the files:

sudo mkdir -p /mnt/cd1
sudo mount -t auto -o loop,ro CentOS-6.3-x86_64-bin-DVD1.iso /mnt/cd1/
sudo cp -R /mnt/cd1/* /srv/repo/6/os/x86_64/
sudo umount /mnt/cd1
sudo mount -t auto -o loop,ro CentOS-6.3-x86_64-bin-DVD2.iso /mnt/cd1/
sudo cp -R /mnt/cd1/Packages/* /srv/repo/6/os/x86_64/Packages/

If you have SELinux enabled, and you should, you’ll need to ensure that Apache’s httpd daemon can read these files with the following:

chcon -Rv --type=httpd_sys_content_t /srv/repo

This will set the SELinux context of the directories and all subfolders to “httpd_sys_content_t”, which is the context httpd can read.
At this point, if your server is named “build” you can do an install from the netinstall CD, and point it to “http://build/centos/6/os/$basearch” as the install point, and all packages will be installed from that location. If you do this “base” install with the 6.4 sources, when 6.5 comes out, read the Changing “base” Warning below.
A point of note, for people who come here confused – I haven’t mentioned “reposync” or “createrepo” tools yet – those will be used much later, or not at all. For the “base” repository, all of the repository metadata is included in the DVD copy.

Setting up the “updates” repository

The updates repository needs to be updated constantly, or it becomes meaningless. This is the default channel by which the distribution provides security and bug fixes for the software packages they ship. There are 2 means of synchronizing a remote repository: reposync and rsync. Since rsync is more widely used and easier for many people to understand, and because the benefits of reposync aren’t easy for me to find, we’ll use rsync.

The “updates” repository is large – the 64-bit x86_64 “updates” repository in our lab is currently 14GB, and the “os” repository is 5.6GB. Each additional architecture and version supported will have similar size requirements.

If you’ve already created this directory structure following the “base” repository instructions above, skip ahead to choosing a mirror; otherwise, create the same layout now:

mkdir -p /srv/repo/6/{os,updates,internal}/{i386,x86_64}

Now to pull down the repository updates from a mirror. First choose a mirror from the list for your distribution: CentOS, Scientific Linux, or RHEL (ask your support team).
To make sure your “updates” repository doesn’t go out of sync, schedule a cron job, by creating a file in /etc/cron.daily/ or in /etc/cron.hourly/ (See this warning before syncing hourly), called “reposync” and pasting the rsync commands into it (here’s ours):

#!/bin/sh

mirror="mirror.steadfast.net/centos"
repobase="/srv/repo"

for vers in 5 6; do

rsync -art rsync://${mirror}/${vers}/os/x86_64 ${repobase}/${vers}/os/
if [ $? -ne 0 ]; then
echo "ERROR getting os files from ${mirror} for ${vers} x64, quitting."
exit 1
fi
rsync -art rsync://${mirror}/${vers}/updates/x86_64 ${repobase}/${vers}/updates/
if [ $? -ne 0 ]; then
echo "ERROR getting updates files from ${mirror} for ${vers} x64, quitting."
exit 1
fi
if [ $vers -eq 5 ]; then
rsync -art rsync://${mirror}/${vers}/os/i386 ${repobase}/${vers}/os/
if [ $? -ne 0 ]; then
echo "ERROR getting os files from ${mirror} for ${vers} i386, quitting."
exit 1
fi
rsync -art rsync://${mirror}/${vers}/updates/i386 ${repobase}/${vers}/updates/
if [ $? -ne 0 ]; then
echo "ERROR getting updates files from ${mirror} for ${vers}, quitting."
exit 1
fi
fi
done

The first rsync took about 2 hours, but subsequent syncs are much faster, since rsync can skip existing files. You’ll notice 2 things: 1) that we have disabled syncing the 32-bit CentOS 6 updates; 2) that we also rsync the “os” repo – for more on this, read the Changing “base” Warning below. Our lab doesn’t have or test 32-bit CentOS 6, and we always test “latest” releases. Also, I set the particular mirror to be a variable, so that it would be easy to change if required.
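One easy-to-forget detail when dropping a script into /etc/cron.daily/: run-parts skips files that aren’t executable, so the new job silently never runs without the execute bit. Shown here against a demo path; the real file is /etc/cron.daily/reposync:

```shell
# run-parts only executes cron.daily entries that have the execute bit.
script=./reposync.demo   # stand-in for /etc/cron.daily/reposync
printf '#!/bin/sh\n' > "$script"
chmod 755 "$script"
ls -l "$script"
```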

Setting clients to use your repositories

The easiest way to point a single client at your newly-set-up repo is to simply edit the /etc/yum.repos.d/*-Base.repo file by hand, but that’s slow. I put a fixed repo file in the root of the webserver (in /srv/repo, named CentOS-Base.repo). In it, I’ve simply commented out the mirror lines and replaced them with a local baseurl:

[base]
name=CentOS-$releasever - Base
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
#baseurl=http://mirror.centos.org/centos/$releasever/os/$basearch/
baseurl=http://build/centos/$releasever/os/$basearch
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-$releasever

#released updates
[updates]
name=CentOS-$releasever - Updates
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates
#baseurl=http://mirror.centos.org/centos/$releasever/updates/$basearch/
baseurl=http://build/centos/$releasever/updates/$basearch
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-$releasever
enabled=1


This baseurl doesn’t have to be HTTP. Since it’s a local server and you installed NFS, you can use autofs and a file:// URI instead:

baseurl=file:///net/build/srv/repo/$releasever/os/$basearch

I haven’t found information that says conclusively if $releasever is properly expanded in that line, but testing shows it works fine in our lab.

Now to test it out on your CentOS 6 client:

wget -O /etc/yum.repos.d/CentOS-Base.repo http://build/CentOS-Base.repo.6
yum clean all
yum upgrade

Now all your yum installs and searches will come from the local repo.

Changing the “base” Source


This is an issue we ran into during testing of this post – you’ll notice all of the rsync scripts pull from “centos/6” not “centos/6.4”, and that we rsync both “updates” and “os” repos nightly.

Which “base” repo you sync against matters. In testing, we initially had the “updates” repo syncing nightly, but not the “os” repo (which was originally staged from a 6.3 DVD). When CentOS 6.4 was released, our “updates” repo shrank considerably, and we started receiving “Error: Protected multilib versions:” on all updates. This was because “updates” are updates applied against “base”, so the two must be in sync. Therefore, either set up all your systems to sync “6.4” and have no problems, or sync both “base”/”os” and “updates” simultaneously, at the cost of more bandwidth and storage usage.

If you do keep “base” tracking the latest release, your previously-installed systems will continue to upgrade properly – they just will become CentOS 6.4 if they were previously CentOS 6.3. BUT, PXE booting will break. From my colleague:

If you use PXE boot installs, you need to pay attention to when your “base” repo updates so you can copy the appropriate vmlinuz and initrd images into the pxe directories.

Staging internally

And a warning about hourly syncing.
Most public mirrors will probably blacklist you if you sync hourly. Don’t do that. However, you can set up multiple internal servers, where SERVERA pulls from a public mirror daily, and SERVERB pulls from SERVERA, or even an intermediary. For example, SERVERA may be the update server your dev lab uses. When patches are verified in dev, you can push updates from SERVERA to a staging location that SERVERB (and SERVERC, etc.) would rsync from, so that production could be “released” to install those patches.

Again: Don’t do hourly syncs to public mirrors.

Series Links

Return to the series header Yum Repositories for Corporate Deployments.

Continue to Part 3 (TBD)



