HowTo


I recently booted up a long-powered-off test system for a customer and realized it was still running CentOS 4.7. up2date gives one of these error messages:


Error: Bad/Outdated Mirrorlist and Baseurl

[Errno -1] Header is not complete.
Trying other mirror.

To fix, first run this to update to a newer version of yum:

for i in \
libxml2-2.6.16-12.6.i386.rpm \
libxml2-python-2.6.16-12.6.i386.rpm \
readline-4.3-13.i386.rpm \
python-2.3.4-14.7.el4.i386.rpm \
python-elementtree-1.2.6-5.el4.centos.i386.rpm \
sqlite-3.3.6-2.i386.rpm \
python-sqlite-1.1.7-1.2.1.i386.rpm \
elfutils-0.97.1-5.i386.rpm \
popt-1.9.1-32_nonptl.i386.rpm \
rpm-libs-4.3.3-32_nonptl.i386.rpm \
rpm-4.3.3-32_nonptl.i386.rpm \
rpm-python-4.3.3-32_nonptl.i386.rpm \
python-urlgrabber-2.9.8-2.noarch.rpm \
yum-metadata-parser-1.0-8.el4.centos.i386.rpm \
yum-2.4.3-4.el4.centos.noarch.rpm
do
rpm -Uvh http://archive.kernel.org/centos/4.9/os/i386/CentOS/RPMS/${i};
done

(Note: the order is important for the dependencies, and the trailing “\” continues each line, so bash treats the whole list as one command while keeping it readable.)

Then edit /etc/yum.repos.d/CentOS-Base.repo: comment out mirrorlist directives, and in each enabled section add baseurl=http://archive.kernel.org/centos/4.9/os/$basearch, baseurl=http://archive.kernel.org/centos/4.9/updates/$basearch, etc.
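
For example, after the edit the [base] section ends up looking roughly like this (your existing file may differ slightly; the updates section gets the same treatment with the updates/ path, and the gpgcheck/gpgkey lines stay as they were):

[base]
name=CentOS-$releasever - Base
#mirrorlist=... (leave your existing mirrorlist line in place, just commented out)
baseurl=http://archive.kernel.org/centos/4.9/os/$basearch/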

Note the change from vault.centos.org to archive.kernel.org/centos – the CentOS Vault serves RPM headers differently than the yum shipped with CentOS 4.9 expects; the kernel.org archive does not have that problem.

Part 2 of the series: Yum Repositories for Corporate Deployments

Part 1 of this series is: Yum Repositories: Background, Terminology and basics.

The most basic setup of a corporate yum repository would be to build a “base” repository for building centralized installs of systems off of the netinstall CD/USB stick provided by your Linux distro. The next basic repository to set up would be an “updates” repository, synced back to the distro, for bandwidth management for scheduled updates. We’ll walk through these 2 setups in order.

Setting up the repository host server

The first thing to determine is *how* you’re going to host your repository.  HTTP is probably the most common, but FTP and NFS repositories can be built as well.  The additional work for an NFS repository is minimal, so we’ll cover that as well as HTTP, using apache 2.2 as the HTTP server.

First, install the NFS and HTTP server components as well as the required reposync utility (this information is exactly the same for an NFS or HTTP apt repository for Ubuntu – a post for another time) and set them to autostart:
sudo /usr/bin/yum install nfs-utils httpd yum-utils
sudo /sbin/chkconfig httpd on
sudo /sbin/chkconfig nfs on

Next set up the apache server to share the repository. This requires deciding *where* to host the repository, so we’ll use /srv/repo as our base. Create a file /etc/httpd/conf.d/repo.conf with these contents:

Alias /centos /srv/repo

<Directory /srv/repo>
    Options +Indexes
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>

And now share out that directory via NFS and restart the servers:

echo '/srv/repo *(ro,async)' >> /etc/exports
service nfs restart
service httpd restart

The last suggestion I’d make is to create a subdirectory for each OS major version in the root of your repository hierarchy (the per-architecture directories come in the next step), which is simple with bash expansion:

mkdir -p /srv/repo/{5,6}

Setting up a “base” repository

This is as simple as copying the contents of the install DVDs or CDs into the repository. We’ll use the CentOS materials for these examples, but Red Hat Enterprise Linux and Scientific Linux work the same.
First, we need to create our repository structure. This walkthrough will only demo RHEL 6 derivatives, but 5.x will work the same. This is also the point to decide which repositories to build. I’ll create 3 total: “base”, “updates” and “internal”. “base” and “updates” will be the distribution-provided repositories, and “internal” will be for our company’s internal software. The “base” repository actually lives in a folder called “os”, and each repository has subfolders for each architecture that it supports. Therefore, we need to create 3 directories in /srv/repo/6: “os”, “updates”, and “internal”, and each of those gets 2 subdirectories for our supported architectures: “i386” and “x86_64”. (Also simple with bash expansion.)

mkdir -p /srv/repo/6/{os,updates,internal}/{i386,x86_64}

Now copy the files:

sudo mkdir -p /mnt/cd1
sudo mount -t auto -o loop,ro CentOS-6.3-x86_64-bin-DVD1.iso /mnt/cd1/
cp -R /mnt/cd1/* /srv/repo/6/os/x86_64/
sudo umount /mnt/cd1
sudo mount -t auto -o loop,ro CentOS-6.3-x86_64-bin-DVD2.iso /mnt/cd1/
cp -R /mnt/cd1/Packages/* /srv/repo/6/os/x86_64/Packages/
sudo umount /mnt/cd1

If you have SELinux enabled, and you should, you’ll need to ensure that Apache’s httpd daemon can read these files with the following:

chcon -Rv --type=httpd_sys_content_t /srv/repo

This will set the SELinux context of the directories and all subfolders to “httpd_sys_content_t”, which is the context httpd can read.
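
chcon survives reboots but not a full filesystem relabel. If you want the context to be permanent, and you have the policycoreutils-python package installed, you can record it in the local policy and then apply it (a small extra step, not strictly required):

semanage fcontext -a -t httpd_sys_content_t "/srv/repo(/.*)?"
restorecon -Rv /srv/repo
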
At this point, if your server is named “build” you can do an install from the netinstall CD, and point it to “http://build/centos/6/os/x86_64” (substituting your architecture) as the install URL, and all packages will be installed from that location. If you do this “base” install with the 6.4 sources, when 6.5 comes out, read the Changing the “base” Source warning below.
A point of note, for people who come here confused – I haven’t mentioned “reposync” or “createrepo” tools yet – those will be used much later, or not at all. For the “base” repository, all of the repository metadata is included in the DVD copy.

Setting up the “updates” repository

The updates repository needs to be updated constantly, or it becomes meaningless. This is the default channel by which the distribution provides security and bug fixes for the software packages they ship. There are 2 means of synchronizing a remote repository: reposync and rsync. Since rsync is more widely used and easier for many people to understand, and because the benefits of reposync aren’t easy for me to find, we’ll use rsync.

The “updates” repository is large – the x86_64 “updates” repository in our lab is currently 14GB, and the “os” repository is 5.6GB. Each additional architecture and version supported will have similar size requirements.

If you’ve already set up the initial structure from the instructions for the “base” repository, skip this paragraph: First, we need to create our repository structure. This walkthrough will only demo RHEL 6 derivatives, but 5.x will work the same. This is also the point to decide which repositories to build. I’ll create 3 total: “base”, “updates” and “internal”. “base” and “updates” will be the distribution-provided repositories, and “internal” will be for our company’s internal software. The “base” repository actually lives in a folder called “os”, and each repository has subfolders for each architecture that it supports. Therefore, we need to create 3 directories in /srv/repo/6: “os”, “updates”, and “internal”, and each of those gets 2 subdirectories for our supported architectures: “i386” and “x86_64”. (Also simple with bash expansion.)

mkdir -p /srv/repo/6/{os,updates,internal}/{i386,x86_64}

SKIP TO HERE.
Now to pull down the repository updates from a mirror. First choose a mirror from the list for your distribution: CentOS, Scientific Linux, or RHEL (ask your support team).
To make sure your “updates” repository doesn’t go out of sync, schedule a cron job, by creating a file in /etc/cron.daily/ or in /etc/cron.hourly/ (See this warning before syncing hourly), called “reposync” and pasting the rsync commands into it (here’s ours):

#!/bin/sh

mirror="mirror.steadfast.net/centos"
repobase="/repo/yum/"

for vers in 5 6; do
    rsync -art rsync://${mirror}/${vers}/os/x86_64 ${repobase}/${vers}/os/
    if [ $? -ne 0 ]; then
        echo "ERROR getting os files from ${mirror} for ${vers} x64, quitting."
        exit 1
    fi
    rsync -art rsync://${mirror}/${vers}/updates/x86_64 ${repobase}/${vers}/updates/
    if [ $? -ne 0 ]; then
        echo "ERROR getting updates files from ${mirror} for ${vers} x64, quitting."
        exit 1
    fi
    if [ $vers -eq 5 ]; then
        rsync -art rsync://${mirror}/${vers}/os/i386 ${repobase}/${vers}/os/
        if [ $? -ne 0 ]; then
            echo "ERROR getting os files from ${mirror} for ${vers} i386, quitting."
            exit 1
        fi
        rsync -art rsync://${mirror}/${vers}/updates/i386 ${repobase}/${vers}/updates/
        if [ $? -ne 0 ]; then
            echo "ERROR getting updates files from ${mirror} for ${vers} i386, quitting."
            exit 1
        fi
    fi
done
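
One gotcha: run-parts only executes files in /etc/cron.daily/ (or cron.hourly/) that are marked executable, so don’t forget:

chmod +x /etc/cron.daily/reposync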

The first rsync took about 2 hours, but subsequent syncs are much faster, since rsync can skip existing files. You’ll notice 2 things: 1) we don’t sync the 32-bit CentOS 6 updates; 2) we also rsync the “os” repo – for more on this, read the Changing the “base” Source section below. Our lab doesn’t have or test 32-bit CentOS 6, and we always test “latest” releases. Also, I set the particular mirror to be a variable, so that it would be easy to change if required.

Setting clients to use your repositories

The easiest way to set a single client to use your newly-set-up repo is to simply edit the /etc/yum.repos.d/*-Base.repo file by hand, but that’s slow once you have more than a couple of machines. I put a fixed repo file in the root of the webserver (in /repo/yum/, named CentOS-Base.repo). In it, I’ve simply commented out the mirror line and replaced it with a local baseurl:

[base]
name=CentOS-$releasever - Base
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
#baseurl=http://mirror.centos.org/centos/$releasever/os/$basearch/
baseurl=http://build/centos/$releasever/os/$basearch
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-$releasever

#released updates
[updates]
name=CentOS-$releasever - Updates
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates
#baseurl=http://mirror.centos.org/centos/$releasever/updates/$basearch/
baseurl=http://build/centos/$releasever/updates/$basearch
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-$releasever
enabled=1


This baseurl doesn’t have to be HTTP. Since it is a local server and you installed NFS, you can instead use autofs and a file:// URI:

baseurl=file:///net/build/srv/repo/$releasever/os/$basearch

I haven’t found information that says conclusively if $releasever is properly expanded in that line, but testing shows it works fine in our lab.
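
The /net path in that URI comes from the autofs “-hosts” map, which mounts build’s NFS exports on demand. On a stock CentOS 6 client that should just mean installing and starting autofs (check /etc/auto.master for the “/net -hosts” line if /net/build doesn’t appear):

sudo yum install autofs
sudo chkconfig autofs on
sudo service autofs start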

Now to test it out on your CentOS 6 client:

wget -O /etc/yum.repos.d/CentOS-Base.repo http://build/CentOS-Base.repo.6
yum clean all
yum upgrade

Now all your yum installs and searches will come from the local repo.
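
If you want to double-check where yum is actually pulling from, the verbose repolist output includes the resolved baseurl for each repo:

yum -v repolist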

Changing the “base” Source


This is an issue we ran into during testing of this post – you’ll notice all of the rsync scripts pull from “centos/6” not “centos/6.4”, and that we rsync both “updates” and “os” repos nightly.

Which “base” repo you sync against matters. In testing, we initially had the “updates” repo syncing nightly, but not the “os” repo (which was originally staged from a 6.3 DVD). When CentOS 6.4 was released, our “updates” repo shrank considerably, and we started receiving “Error: Protected multilib versions:” on all updates. This was because “updates” are updates applied against “base”, so the two must stay in sync. Therefore, either set up all your systems to sync “6.4” and have no problems, or sync both “base”/”os” and “updates” simultaneously, at the cost of more bandwidth and storage usage.

If you do keep “base” tracking the latest release, your previously-installed systems will continue to upgrade properly – they just will become CentOS 6.4 if they were previously CentOS 6.3. BUT, PXE booting will break. From my colleague:

If you use PXE boot installs, you need to pay attention to when your “base” repo updates so you can copy the appropriate vmlinuz and initrd images into the pxe directories.
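
As a rough example (the repository path is from my layout above, and the tftpboot subdirectory is whatever your PXE config points at, so adjust both):

cp /srv/repo/6/os/x86_64/images/pxeboot/vmlinuz /var/lib/tftpboot/centos6-x86_64/
cp /srv/repo/6/os/x86_64/images/pxeboot/initrd.img /var/lib/tftpboot/centos6-x86_64/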

Staging internally

And a warning about hourly syncing.
Most public mirrors will probably blacklist you if you sync hourly. Don’t do that. However, you can set up multiple internal servers, where SERVERA pulls from a public mirror daily, and SERVERB pulls from SERVERA, or even an intermediary. For example, SERVERA may be the update server your dev lab uses. When patches are verified in dev, you can push updates from SERVERA to a staging location that SERVERB (and SERVERC, etc.) would rsync from, so that production could be “released” to install those patches.

Again: Don’t do hourly syncs to public mirrors.

Series Links

Return to the series header Yum Repositories for Corporate Deployments.

Continue to Part 3 (TBD)

I have a new Lenovo T430 with Nvidia/Intel hybrid graphics. The Intel card is for power saving, the Nvidia card for performance. The easiest way to handle this setup is to just choose the correct card on bootup, but this is inconvenient. Bumblebee provides a way to use the Intel card most of the time, but use the Nvidia for high-performance tasks such as games, and ONLY for those apps. Once it’s set up, it works pretty well, except for a few apps that require additional tweaking, such as Steam, or Wine.

One of the apps that requires tweaking is VMware Workstation (I’m running WKS 9). cmillersp provided a great write-up on the VMware Communities, which is what I set up on my system. VMware Workstation runs great – I can load OpenGL 3D apps in VMs and everything runs fantastic.

Except that I run VMs all the time, and using the Nvidia card all the time kills the performance benefit of the Intel card twice: once because I’m using the Nvidia card, and once again because BOTH GPUs are running at the same time. So I wanted a way to dynamically choose which card to run VMware under, based on whether I was on AC power or battery power.

The result is the attached script below, which I’ll be submitting to the Bumblebee wiki / project, as well as the VMware forums. This is version 1, which works as follows:

  1. Must be installed with “sudo ~/bin/vmware --install” – it will run cd /usr/lib/vmware/bin; mv vmware-vmx vmware-vmx.real and then create the wrapper script at /usr/lib/vmware/bin/vmware-vmx, so that it’s a single portable script.
  2. Takes several options to force using the Nvidia card or not
  3. If no force option is applied, automatically determines AC adapter state or battery charge/discharge state (see the sketch after this list)
  4. Based on the above, decides whether to launch /usr/bin/vmware normally, or via the instructions from cmillersp
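
For the curious, the power-source check is nothing fancy. This is just a minimal sketch of the decision logic, assuming a /sys/class/power_supply layout like the T430’s (where the adapter shows up as “AC”); it is not the actual wrapper from the repo:

#!/bin/sh
# Sketch only: decide Intel vs Nvidia based on AC adapter state.
on_ac=0
for ac in /sys/class/power_supply/AC*/online; do
    [ -r "$ac" ] && [ "$(cat "$ac")" = "1" ] && on_ac=1
done

if [ "$on_ac" -eq 1 ]; then
    # Plugged in: worth waking the Nvidia card via Bumblebee.
    exec optirun /usr/lib/vmware/bin/vmware-vmx.real "$@"
else
    # On battery: stay on the Intel card.
    exec /usr/lib/vmware/bin/vmware-vmx.real "$@"
fi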

I’ve done some additional work to make it run via either “primusrun” or “optirun”. Optirun is much slower, but at least functional with fewer installs. I hope this is useful to someone else!

Edit: 2012-12-17 – v1.1 added gksu auto-detection
Edit: 2014-01-08 – Moved to github repo: github.com/docsmooth/vmware-bumblebee

Joe and Jorge posted these back in 2005 and 2006, but they’re impossible for me to find in Google lately, possibly because of age:

Moving objects between OUs

(2006-01-05) Creating A Taskpad And Delegating Several Admin Tasks

In order to move an object in DS, you need the following three permissions:
1) DELETE_CHILD on the source container or DELETE on the object being moved
2) WRITE_PROP on the object being moved for two properties: RDN (name) and
CN (or whatever happens to be the rdn attribute for this class, i.e. ou for
org units).
3) CREATE_CHILD on the destination container.
–
Dmitri Gavrilov
SDE, Active Directory Core
This posting is provided “AS IS” with no warranties, and confers no rights.
Use of included script samples are subject to the terms specified at
http://www.microsoft.com/info/cpyright.htm

But what, specifically, does that mean?

  1. To provide these rights, after delegating control for the Creation and Deletion of the object (Computer/User/etc.), open ADSIEDIT.MSC and navigate to the OU in question.
  2. Right-click the OU and choose “Properties”
  3. Click on the “Security” tab.
  4. Click the “Advanced” button.
  5. Click the “Add” button to add a new security right.
  6. Enter the group you want to delegate the control to and click “OK”
  7. Choose the “Properties” tab.
  8. In the pulldown, choose “Descendant Computer objects”
  9. Grant:
  1. Read and Write canonicalName
  2. Read and Write name
  3. Read and Write Name
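
If you’d rather script this than click through ADSI Edit, dsacls should be able to make the same grants. This is a hypothetical example (the OU and group names are made up, and you should verify the resulting ACEs in the GUI afterwards):

dsacls "OU=Workstations,DC=contoso,DC=com" /I:S /G "CONTOSO\OU-Movers:RPWP;name;computer"
dsacls "OU=Workstations,DC=contoso,DC=com" /I:S /G "CONTOSO\OU-Movers:RPWP;cn;computer"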

Many are the times we’ve run into DNS configuration problems with Microsoft AD. After being asked for advice a few more times than normal this year, I’ve pulled together several emails for this list of “Troubleshooting Microsoft AD-integrated DNS” highlights below. We’ll first cover checking your server configuration, then the configuration of the zones themselves. For each topic, we’ll do a checklist followed by an explanation.

Server configuration:

Checklist

  1. Is the server (Windows 2003 or higher) pointing to itself for primary DNS in the network configuration?
  2. If a standalone DC: Does the server have *no* secondary DNS in the network configuration?
  3. If there are multiple DCs: Does the server list only other DCs in the secondary DNS server list in the advanced network configuration?
  4. Does the server have proper forwarders in the DNS server configuration (to the parent domain or to the ISP, but not both)?
  5. In a command prompt, run the following:
    ipconfig /registerdns
    net stop netlogon
    net start netlogon
  6. Read DNS and System logs to make sure there are no issues being reported.
  7. Wait 20 minutes.

Explanation

One of the major problems we run into is that customers will put the ISP DNS servers in the network configuration on the DC, not in the DNS Forwarders list in the DNS Server configuration.  The DC *is* a DNS server.  It needs to talk to itself, so that it can register crucial DNS settings in its own database.  If its own database can’t find the information requested (such as www.google.com), then the DNS Server service is responsible for looking that data up, and then caching it so that it’s readily available for other clients, too.  This misconfiguration also has the problem of generating DDNS update requests back to the ISP DNS servers, which are ignored at best, and a security leak at worst (like for military/government installations).
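
A quick way to confirm the DC actually re-registered itself after the netlogon restart is to query for its own SRV records (substitute your AD domain for contoso.com):

nslookup -type=SRV _ldap._tcp.dc._msdcs.contoso.com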

I like to tell my Unix customers “the first rule of administering Active Directory is to go get another cup of coffee.” This forces them to take their hands off the keyboard and wait for cross-site replication (hopefully) before making another change.  It’s a good reminder for the seasoned Windows admins, as well.

Zone Configuration

Reverse Lookup Zones

We’ll cover reverse lookup zones before forward lookup zones, for two reasons: 1) customers screw up reverse lookup configuration much more often than forward lookup configuration; 2) there are (normally) no SRV records in reverse zones.

Checklist

If you have non-Microsoft DNS servers or multiple AD domains in your environment

  1. Does the server have reverse DNS zones defined?
  2. Does any *other* server (in the DNS Forwarders configuration list) have the same reverse DNS zone defined?
  3. Do the defined reverse zones allow “unsecured dynamic updates”?
  4. Are all IP subnets in your network defined as reverse DNS zones on the primary DNS servers (the last forwarders in the network before the ISP)?
  5. Do you have aging and scavenging turned on in the server settings?  If so (you should), do you have all clients automatically renewing their records (Windows clients will by default)?

If you only have a single AD domain, or no non-Microsoft DNS servers

  1. Does the server have reverse DNS zones defined for all IP subnets (including IPv6) in your network?
  2. Do those reverse DNS zones allow dynamic updates?
  3. Is aging of old records enabled with sane no-refresh and refresh values  in the reverse zones?

Explanation

Each DNS Zone is a database.  There can only be one authoritative owner of the database, defined by the SOA record on the Zone.  Any other DNS servers get their information from this SOA, either by normal queries, or by zone transfer (AD replication does a kind of zone transfer).  If two servers are set up with the same zone (create 0.168.192.in-addr.arpa reverse DNS zone in dns1.contoso.com and ns1.worldwidetoys.com, for example), then there is no mechanism to transfer the information between those two servers.

For example: any individual client will only talk to the DNS server it’s configured to talk to (client1.contoso.com gets its DNS info from dns1.contoso.com and winxp1.worldwidetoys.com gets its information from ns1.worldwidetoys.com). Each client will also send updates only to its own DNS server.  This means that client1.contoso.com will register its IP 192.168.0.10 with dns1.contoso.com, and winxp1.worldwidetoys.com will register its IP 192.168.0.20 with ns1.worldwidetoys.com.  These two records will never be synched between dns1.contoso.com and ns1.worldwidetoys.com.  Therefore, when winxp1.worldwidetoys.com asks ns1.worldwidetoys.com “who has 192.168.0.10?”, ns1.worldwidetoys.com will answer “nobody!”.

The DNS admin must fix this problem by manually registering all of the records from ns1.worldwidetoys.com in the zone stored in dns1.contoso.com, deleting the 0.168.192.in-addr.arpa zone from ns1.worldwidetoys.com, and then setting up a forwarder or conditional forwarder to dns1.contoso.com.  Now, that same query results in ns1.worldwidetoys.com looking in its own database, finding no answer, and reaching out to its forwarders to ask, “who has 192.168.0.10?”.  Similarly, when winxp1.worldwidetoys.com goes to register 192.168.0.20, it is directed, via the SOA record, to send that registration to dns1.contoso.com.  This is why reverse zones often need to allow unsecured dynamic updates.
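
On Windows Server 2003 and later, that conditional forwarder can also be created from the command line with dnscmd. A rough sketch for the example above, run on ns1.worldwidetoys.com and assuming 192.168.0.5 is dns1.contoso.com’s address (substitute your real server names and IPs):

dnscmd ns1 /ZoneAdd 0.168.192.in-addr.arpa /Forwarder 192.168.0.5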

Forward Lookup Zones

I have a customer who needs this much data now – I’ll follow up with the Forward Lookup zones in a separate post later this week.
