Wednesday, November 27. 2013
Quoting myself from four years ago:
I had to do the migration to a parallel machine and had only one week to accomplish this. I’ll never do it this way again; rather, I’ll pay for two servers for a short time and decide when to finally switch.
This is exactly how I did it this time.
Although Debian 5 lenny had been released in February 2009, it hadn’t yet made it into Host Europe’s 4.0 line of virtual servers; instead, that virtual machine was still based on 2007’s Debian 4 etch, which received its last kernel update (still a 2.6.18) from the provider in August 2011. I upgraded it to Debian 6 squeeze nonetheless. I noticed that this year’s Debian 7 wheezy does not run under an etch kernel (libc6, rkhunter and aide are affected in particular), and as squeeze has been oldstable since May and will no longer be maintained come next May, it was time to perform an upgrade.
Now I run an instance of their 7.0 line at the same price, but with RAM and disk space both doubled (to 2 GB and 100 GB, respectively). It is still based on squeeze with a 2.6.32 kernel; based on my experience, I expect it to run wheezy and jessie before I have to switch again (in about another four years).
As before (and as seven years ago), I did the TCP forwarding using rinetd, except for Postfix, for which I set up relaying again.
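For the record, a rinetd forwarding rule is a one-liner per port. A minimal sketch with placeholder addresses (not my actual setup), plus the corresponding Postfix relaying on the old machine:
# /etc/rinetd.conf on the old server: bindaddress bindport connectaddress connectport
192.0.2.10   80   198.51.100.20   80
192.0.2.10  443   198.51.100.20  443
# /etc/postfix/main.cf on the old server: hand mail for my domain over to the new box
relay_domains = example.org
transport_maps = hash:/etc/postfix/transport
# /etc/postfix/transport (run postmap on it afterwards); domain and IP are placeholders
example.org   smtp:[198.51.100.20]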
Tuesday, January 5. 2010
Apparently I wasn’t the only one who had a few experiences with this, as Debian has now fixed (or rather “worked around”) the issue on its own: Build identifier: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.5) Gecko/20091123 Iceweasel/3.5.6 (like Firefox/3.5.6; Debian-3.5.6-1)
Wednesday, December 9. 2009
Almost three years ago I migrated to a virtual server at HostEurope.de. It was a real relief not to have to care for any hardware anymore, and I’m really satisfied with their service, which includes monitoring and restorable snapshots. The only major problem I had was when I once tried to upgrade the C library on an incompatible kernel version; I learned to use Debian Stable on servers. Minor problems, however, arose once in a while when I hit the privvmpages (private memory) limit.
As only 256 MB of RAM was guaranteed for my package in their 2.0 line at €15/month, I upgraded to the 512 MB package for €20/month a few months ago, which was a smooth single-click task. As they have now already introduced their 4.0 line, I upgraded to 1024 MB for only €13/month. But I had to do the migration to a parallel machine and had only one week to accomplish this. I’ll never do it this way again; rather, I’ll pay for two servers for a short time and decide when to finally switch. And as the monthly fee has now decreased, I had to pay €10 for this “downgrade” anyway.
I had planned to simply sync /etc, /usr, /var and /home to the new system to have a nonetheless smooth migration. But the new system turned out to be 64-bit, so it took me more time to do the migration by hand, although I had asked their service in advance whether that would be possible. A further drawback was that I couldn’t keep the system’s RRD files, as they seem to be platform-specific as well; that meant all system log counters started fresh, as I was too lazy to export/import their data.
A WTF situation arose when I noticed that the system had various server packages installed but was missing their symlinks in /etc/rc*.d and cron tabs in /etc/cron.*; I had to compare those with my old system. phpMyAdmin wasn’t working anymore, as it suddenly needed a localhost directive for MySQL in the config, which took me some time to find out. Finally, ajaxterm didn’t launch in --daemon mode; that took me some time as well. As a quick hack I now start it without --daemon and send it to the background with /usr/bin/nohup. I also had to take care that /etc/hosts is now dynamically created/overwritten at boot time; in my /etc/init.d/hostname_vps I now copy it from /etc/my.hosts.
For TCP forwarding I used rinetd and set up Postfix relaying like previously. An interesting detail is that I moved from a 2×1500 MHz machine to one with 16×141 MHz.
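For the record, the two quick hacks from above boil down to something like this (the ajaxterm path and port are Debian’s defaults as I remember them, so treat them as assumptions):
# quick hack: launch ajaxterm without --daemon, pushed to the background via nohup
/usr/bin/nohup /usr/share/ajaxterm/ajaxterm.py --port=8022 > /dev/null 2>&1 &
# in /etc/init.d/hostname_vps: restore my static hosts file after it gets overwritten at boot
cp /etc/my.hosts /etc/hosts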
Thursday, July 16. 2009
This official Debian manual explains how to set up an SSH server in a chroot. However, and although it was last modified in March 2009, the manual appeared incomplete to me. Here are a few additional steps to consider:
- The manual uses makejail (with the config /usr/share/doc/makejail/examples/sshd.py) to automatically set up /var/chroot/sshd; the script uses ldd calls to find and copy the necessary libraries and files. However, its work is incomplete: you can’t launch the chroot’s Bash, and even /bin/ls doesn’t work. Using ldd I found out that /lib64/ld-linux-x86-64.so.2 is missing in the chroot.
- To use an elegant /etc/init.d/ssh-chroot script to control the chrooted daemon from the host system, you need to make /sbin/start-stop-daemon available in the chroot. You can then use /etc/init.d/ssh as a basis for your init script. Note that the chrooted SSH takes its config from /var/chroot/sshd/etc/ssh/sshd_config; it is possible to have both the native and the chrooted SSH daemon listen on port 22, but on different IPs.
- The manual mentions that proc must be mounted in the chroot as well and that syslogd should also lay a socket in there. But it doesn’t mention that devpts must be mounted in /var/chroot/sshd/dev/pts: add this to the host’s /etc/fstab with the options noexec,nosuid,gid=5,mode=620, and make the tty group available in /var/chroot/sshd/etc/group!
- If you make strace work in the chroot, you can find out via
~# chroot /var/chroot/sshd
/# strace /usr/sbin/sshd -d
and by looking into /var/log/auth.log that the /etc/pam.d/common-* stuff is missing.
- Having considered all this, login should finally work if you have users and groups in /var/chroot/sshd/etc/{passwd,shadow,group}. You might need the coreutils in the chroot; you can install them using the makejail config mentioned above.
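A minimal sketch of the devpts entry and the PAM fix described above (paths follow the manual’s /var/chroot/sshd layout; the cp shortcut is my own shorthand, not from the manual):
# host’s /etc/fstab: a separate devpts for the chroot
devpts  /var/chroot/sshd/dev/pts  devpts  noexec,nosuid,gid=5,mode=620  0  0
# copy the missing PAM configuration into the chroot
mkdir -p /var/chroot/sshd/etc/pam.d
cp /etc/pam.d/common-* /var/chroot/sshd/etc/pam.d/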
Tuesday, March 3. 2009
Debian’s standard kernel 2.6.26 has a little drawback: the coretemp module doesn’t recognize Intel’s Core i7 processor. The sensor chip W83667HG of my Asus P6T Deluxe (LGA 1366 socket) is not yet supported either. This is a typical symptom of running Linux on too-recent hardware.
However, if reading an aggregated CPU temperature instead of eight individual core temperatures is enough for you, you can force-load the w83627ehf module with # modprobe w83627ehf force_id=0x8860 and tune /etc/sensors3.conf to get rid of false alarms.
Luckily, for the current prerelease 2.6.29-rc6 there’s a very recent bunch of patches available that brings support for the W83667HG into the w83627ehf module, whereas coretemp already finds the CPU. Follow this guide on how to compile and install a new kernel the Debian way. A drawback is that source modules from the official Debian repositories might not compile anymore, e.g. the MadWifi modules; you have to get them from the project directly (via SVN). You also need to build a current version of lm-sensors (via SVN) to correctly gather the values from w83627ehf. You can then visualize the values, e.g. with gkrellm. Now have fun stress-testing your system with MPrime.
Update 03/24: It appears that these patches didn’t make it into the final 2.6.29 release.
Update 04/09: The patches went into 2.6.30-rc1.
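If the aggregated reading is enough for you, the force_id workaround from above can also be made persistent across reboots; a sketch (the file name under /etc/modprobe.d is my own choice):
# /etc/modprobe.d/w83627ehf: pass the ID of the supported sibling chip to the driver
options w83627ehf force_id=0x8860
# have the module loaded at boot
echo w83627ehf >> /etc/modules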
Sunday, March 1. 2009
Because it’s a pain, a short reminder for myself on how to connect a Bluetooth-capable cell phone to a Debian box:
- Install bluez-utils.
- Read this (German) guide and follow the instructions, but ignore the obsolete stuff about the pin helper; today it’s just an entry like passkey “1337”;
- For whatever reason, hcid doesn’t launch a passkey-agent automatically. For the very first connection, you have to do it yourself:
- Get both the passkey-agent.c.gz source and its Makefile from the bluez-utils examples directory and compile it; you’ll need libdbus-1-dev.
- Launch it as /path/to/passkey-agent 1337 HA:RD:WA:RE:AD:DR and go to a different shell.
- cat < /dev/rfcomm0 and enter the requested PIN on your cell. If nothing special happens, you’re done.
In your cell’s Bluetooth settings you could now permanently authorize your PC. The next cat < /dev/rfcomm0 shouldn’t trigger a PIN request anymore.
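For cat < /dev/rfcomm0 to reach the phone at all, /dev/rfcomm0 has to be bound to the phone’s hardware address first. With bluez-utils that’s typically an entry along these lines in /etc/bluetooth/rfcomm.conf (address and channel are placeholders):
rfcomm0 {
    bind yes;
    device HA:RD:WA:RE:AD:DR;
    channel 1;
    comment "my cell phone";
}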
Friday, January 16. 2009
About ¾ of a year later I made my next attempt at installing NVIDIA CUDA on Debian lenny, mainly because I wanted to try GpuCV, a GPU-accelerated computer vision library that’s partly compliant with OpenCV. Debian is still not officially supported by NVIDIA, but the finally upcoming release of lenny and NVIDIA’s support for the rather recent Ubuntu 8.04 (2008/04) have a very positive effect: CUDA 2.1 Beta works out of the box, and this with lenny’s GCC 4.3!
The only thing I had to consider was to install libxmu-dev and libc6-dev-i386 (for my 64-bit CPU) to make CUDA’s examples compile. Also, in order to actually execute the examples, one has to rely on the NVIDIA driver version 180.06 that CUDA provides; even NVIDIA’s version 180.22 fails to execute the OpenGL examples with the message cudaSafeCall() Runtime API error in file <xxxxx.cpp>, line nnn : unknown error.
With CUDA working I could then think of compiling GpuCV from SVN. But the build relies on Premake 3.x, which is not available in Debian and has to be installed in advance. In addition, the package libglew1.5-dev is needed. Some more stumbling blocks: I had to define the typedef unsigned long GLulong myself. Also, and IIRC, the provided SugoiTools of GpuCV didn’t link, so I fetched and compiled them from SVN as well and replaced the .so files in GpuCV’s resources directory. After that, GpuCV finally compiled (except for GPUCVCamDemo, as I don’t have the cvcam lib installed). After including the lib/gnu/linux paths in $LD_LIBRARY_PATH, the GPUCVConsole demo finally runs. The next step will be to actually use that lib.
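For reference, the two fixes mentioned above, sketched (the checkout path is a placeholder, and the exact header I patched is from memory):
# the missing typedef, added to one of the affected GpuCV headers:
#   typedef unsigned long GLulong;
# make GpuCV’s freshly built libraries findable, then run the console demo
export LD_LIBRARY_PATH=/path/to/gpucv/lib/gnu/linux:$LD_LIBRARY_PATH
./GPUCVConsole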
Wednesday, July 9. 2008
In short, and for non-computer-geeks: SSL certificates are used for data encryption, to securely transmit private data over the internet. You probably know web URLs starting with “https” instead of “http”. Now, those secure websites may run on machines that use the Debian GNU/Linux operating system. Unfortunately, Debian had a problem with generating such SSL certificates: it was recently uncovered that certificates generated there no longer secure your private data. Luckily, a fix came out quickly, but there are still many websites using the broken certificates. You want to be alarmed whenever you stumble over a website that still uses one of those bad SSL certificates! You want the SSL Blacklist add-on for your Firefox web browser!