Entries tagged as programming
Wednesday, October 31. 2018
Since ham radio call signs can contain both the letter O and the digit 0 at the same time, the zeros are often printed as slashed zeros. But because there is no dedicated Unicode character for this, the letter Ø is most often used instead, since one cannot always rely on a monospace font that already renders the zero in slashed form. (The same problem can occur in other contexts, such as printing hashes or codes.)
There's a lot of ham radio logging software out there that supports a workflow for printing QSL cards or label stickers, but I'm not sure whether any of it supports replacing zeros with Ø before printing, and only in call signs, not in other data such as date, time or frequency. The software I use on Ubuntu, CQRLOG, exports QSOs into a CSV file that gLabels reads in to prepare and perform the actual print. I wanted a step in between that transforms this CSV file, substituting Ø only in call signs. It took me some time, but here's my solution (shown with a bogus CSV line):
$ echo "30-Oct-2018,12:04,AB0/K100ABC/P,14.104,F0und AA2/YZ300ZY/0" \
> | perl -pe '1 while s|(.*[A-Z0-9][A-Z0-9/]+)+0([A-Z0-9/]*.*)|\1Ø\2|g'
30-Oct-2018,12:04,ABØ/K1ØØABC/P,14.104,F0und AA2/YZ3ØØZY/Ø
I stuffed this regex pattern into this alias:
alias qsl0='src=$HOME/path/to/qsl.csv; \
  cp -p $src $src.orig; \
  perl -pe \
    "1 while s|(.*[A-Z0-9][A-Z0-9/]+)+0([A-Z0-9/]*.*)|\1Ø\2|g" \
    < $src.orig > $src'
Update: Fixed the pattern to allow prefixes that start with a digit.
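For readers who prefer Python, the idea behind the one-liner can be sketched like this (a re-implementation for illustration, not the code I actually use): a zero is replaced only when it is preceded by at least two characters from the call-sign alphabet, and the substitution is repeated until nothing changes.

```python
import re

# Replace 0 with Ø only inside call-sign-like tokens: the zero must be
# preceded by at least two characters from the class [A-Z0-9/].
def slash_zeros(line: str) -> str:
    pat = re.compile(r'([A-Z0-9][A-Z0-9/]+)0')
    prev = None
    while prev != line:  # repeat until no eligible zero is left
        prev = line
        line = pat.sub(r'\1Ø', line)
    return line

print(slash_zeros("30-Oct-2018,12:04,AB0/K100ABC/P,14.104,F0und AA2/YZ300ZY/0"))
# → 30-Oct-2018,12:04,ABØ/K1ØØABC/P,14.104,F0und AA2/YZ3ØØZY/Ø
```

Dates, times and frequencies keep their plain zeros, because their zeros are never preceded by two characters from the call-sign alphabet.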
Thursday, December 14. 2017
A roar went through the tech-savvy finance community when Yahoo suddenly shut down its Finance API on Nov 1, 2017. Dozens of libraries and scripts that had parsed HTTP output from this API for years ceased to work. No longer could libraries such as Finance::Quote query prices of stocks, funds and currencies and hand them over to applications such as GnuCash. As a user of Finance::Quote—I even wrote an extension module for it—I was affected as well, but finally I stumbled upon a solution: version 1.41 recently introduced an Alpha Vantage module, and querying this source instead of Yahoo is straightforward.
Unfortunately, on Debian or Ubuntu one has to install this version from source, but that's not hard to do. And to use Alpha Vantage, you have to register for an API key, but that's no issue either. One thing that doesn't work yet is currency conversion, as that part of the module is still tied to Yahoo's interface; Alpha Vantage does provide currency conversion through its API, though. Looking at AlphaVantage.pm, I recreated a simple piece of Perl logic to perform currency conversion. Note that I haven't packed it into a real Finance::Quote module; it's just a simple Perl script that Works For Me™. For simplicity, I don't do any error handling. It relies on an environment variable that contains the API key. Here's how I did it:
#!/usr/bin/perl -w

use lib '/home/user/lib/perl';
#not needed#use Finance::Quote;
#debug#use Data::Dumper;
use JSON qw( decode_json );
use HTTP::Request::Common;
use LWP::UserAgent;

my $pair = $ARGV[0]; # e.g. "XAU:EUR" for gold price in Euro
my @curr = split( /:/, $pair );

my $API_KEY = $ENV{'ALPHAVANTAGE_API_KEY'};
my $url = "https://www.alphavantage.co/query"
        . "?function=CURRENCY_EXCHANGE_RATE"
        . "&from_currency=" . $curr[0]
        . "&to_currency="   . $curr[1]
        . "&apikey="        . $API_KEY;

my $ua = LWP::UserAgent->new;
my $reply = $ua->request(GET $url);
#debug#my $code = $reply->code;
#debug#my $desc = HTTP::Status::status_message($code);
my $body = $reply->content;

my $json_data = decode_json($body);
#debug#print Dumper(\$json_data);

my %fx = %{$json_data->{'Realtime Currency Exchange Rate'}};
my $isodate = substr( $fx{'6. Last Refreshed'}, 0, 10 ); # remove time
my $rate = $fx{'5. Exchange Rate'};
print $isodate . ' ' . $rate . "\n";
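The interesting part is really just the shape of the JSON that comes back. Here is the same extraction in Python, run against a canned response containing the keys the script accesses (the numbers are made up for illustration):

```python
import json

# A canned response shaped like the one the Perl script parses;
# the currency values here are invented for the example.
sample = json.loads('''{
  "Realtime Currency Exchange Rate": {
    "1. From_Currency Code": "XAU",
    "3. To_Currency Code": "EUR",
    "5. Exchange Rate": "1090.25",
    "6. Last Refreshed": "2017-12-14 18:00:00"
  }
}''')

fx = sample["Realtime Currency Exchange Rate"]
isodate = fx["6. Last Refreshed"][:10]  # strip the time part
rate = fx["5. Exchange Rate"]
print(isodate, rate)
# → 2017-12-14 1090.25
```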
Wednesday, October 19. 2016
I recently managed to reactivate my dear old Epson HX-20, a retro computer released in 1983 which I used at the end of the 1980s and in the early 1990s to learn programming. Since I could even read in some BASIC programs that were still stored on the micro cassette, I wondered if I could rescue the code directly, without using OCR on printouts or typing it off by hand. I was aware of how lucky I was that this machine still worked after all those years—the soldering seems to have been much more rigid back then, and attempting to run old PCs that are a decade younger might cause more trouble! To be on the safe side, I invested in a new NiCd battery pack and replaced the original one.
My research first led me to the machine’s RS-232 output, internally called “COM0”. Someone had used it some years ago to connect an HX-20 directly to a PC’s serial port, using a special cable and some adapters. Sadly, this no longer seems to be an option, since these cables have disappeared, and serial-to-USB adapters apparently only work with a certain chip in this case.
Then I stumbled upon the GPL software HXTape, and I was totally baffled: What, the Epson HX-20 had an external cassette interface as well? I knew that concept from our even older machine, the Texas Instruments TI-99/4A. It connected to a special music cassette recorder and encoded data into simple “magnetic bits” onto the tape:
It was quite funny to listen to the noisy sounds when playing the MCs on an ordinary player.
This bidirectional data transfer works over ordinary mono audio cables, one for each direction. And now, it turns out the HX-20 had such an interface as well, and we never used it. But the point is, one could exploit it to decode the audio signals into the original bits and bytes by connecting the HX-20’s “MIC” port to the microphone input of a PC using a simple mono audio cable with standard 3.5 mm jacks! (How tremendous the analog world was! Keep a music cassette lying around in the basement for decades and then just play it. Try this with your ¡Phone!) And that audio decoding is exactly what HXTape is doing.
Continue reading "Rescuing data from an Epson HX-20 to a Linux PC"
Monday, July 26. 2010
Found on a web page that mentions C pointers, among other things. Who would join me in forming a buying cooperative to stock up on a few pointers? Given sufficient memory, of course.
Sunday, May 2. 2010
Besides the fact that it was a pain to find out how ImageMagick’s -average option is to be used, it turned out to operate wrongly. The switch calculates the mean of an image sequence, i.e. the mean color values for every pixel. Say you have a bunch of images with file names like img*.jpg and want the average saved into avg.jpg; then the command line is:
$ convert img*jpg -average avg.jpg
Pretty intuitive. The problem is that the mean image Ī[n] of n images I[k] is defined as

Ī[n] = (1/n) · (I[1] + I[2] + … + I[n]),

while ImageMagick computes it recursively as

Ĩ[k] = (Ĩ[k−1] + I[k]) / 2 (with Ĩ[1] = I[1]),

giving you wrong weights 1⁄2^(n−k+1), e.g. for n = 8

Ĩ[8] = I[8]/2 + I[7]/4 + I[6]/8 + … + I[2]/2⁷ + I[1]/2⁷,

instead of the intended weights 1⁄n, as in

Ī[8] = (I[1] + I[2] + … + I[8]) / 8.
This misbehaviour can be demonstrated by taking a sequence of a plain blue, a plain green and a plain red image and averaging it with the above command. Instead of gray, the result is too reddish, since the last image in the sequence (the red one) contributes half of the result.
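The effect is easy to reproduce numerically. Here is a small Python sketch comparing the recursive pairwise average with the true mean for the red channel of the three images:

```python
# Red channel of a pure blue, a pure green and a pure red image.
channels = [0, 0, 255]

# Recursive pairwise averaging, as -average effectively does it:
acc = channels[0]
for v in channels[1:]:
    acc = (acc + v) / 2
print(acc)  # 127.5: the red image gets weight 1/2 instead of 1/3

# True arithmetic mean:
print(sum(channels) / len(channels))  # 85.0
```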
The solution I found was to call convert recursively like this:
#!/bin/bash
i=0
for file in img*jpg; do
    echo -n "$file.. "
    if [ $i -eq 0 ]; then
        cp $file avg.jpg
    else
        convert $file avg.jpg -fx "(u+$i*v)/$[$i+1]" avg.jpg
    fi
    i=$[$i+1]
done
By this, the average of the above example images correctly becomes gray:
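The -fx expression "(u+$i*v)/$[$i+1]" is the standard incremental-mean update: with i images already merged into v, the new image u enters with weight 1/(i+1). A quick Python check with arbitrary sample values shows that it reproduces the true mean:

```python
values = [10, 20, 60]  # arbitrary sample values

# Incremental mean: after each step, avg holds the mean of the
# first i+1 values, mirroring "(u + i*v) / (i+1)".
avg = values[0]
for i, v in enumerate(values[1:], start=1):
    avg = (v + i * avg) / (i + 1)

print(avg)  # 30.0, the same as sum(values)/len(values)
```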
There might be similar problems with the other image sequence operators, but I haven’t examined them. Maybe I should file a bug.
Friday, January 16. 2009
About ¾ of a year later I made my next attempt at installing NVIDIA CUDA on Debian lenny, mainly because I wanted to try GpuCV, a GPU-accelerated computer vision library that’s partly compliant with OpenCV. Debian is still not officially supported by NVIDIA, but the finally upcoming release of lenny and NVIDIA’s support for the rather recent Ubuntu 8.04 (2008/04) have a very positive effect: CUDA 2.1 Beta works out of the box, and this with lenny’s GCC 4.3! The only thing I had to take care of was installing libxmu-dev and libc6-dev-i386 (for my 64-bit CPU) to make CUDA’s examples compile. Also, in order to actually execute the examples, one has to rely on the NVIDIA driver version 180.06 that CUDA provides; even NVIDIA’s version 180.22 fails to execute the OpenGL examples with the message cudaSafeCall() Runtime API error in file <xxxxx.cpp>, line nnn : unknown error.

With CUDA working I could then think of compiling GpuCV from SVN. But the build relies on Premake 3.x, which is not available in Debian and has to be installed in advance. In addition, the package libglew1.5-dev is needed. A further stumbling block was that I had to define the typedef unsigned long GLulong myself. Also, IIRC, the provided SugoiTools of GpuCV didn’t link, so I fetched and compiled them from SVN as well and replaced the .so files in GpuCV’s resources directory.

After that, GpuCV finally compiled (except for GPUCVCamDemo, as I don’t have the cvcam lib installed). With the lib/gnu/linux paths included in $LD_LIBRARY_PATH, the GPUCVConsole demo finally runs. The next step will be to actually use that lib.
Monday, August 25. 2008
The usenet originally consisted only of text messages, but it soon became a way of exchanging binary files as well. As posts are usually restricted to only a few MB, larger archives have to be split up into several parts, each attached to its own newsgroup posting. If some of these posts aren’t transferred correctly to other usenet hosts, the archive parts are broken. Not necessarily, though, if the poster created PAR2 files in addition to the actual data archive parts: in a RAID-like manner, parity information is placed redundantly into these additional files, so the complete archive can be recovered even if some parts of it are incomplete. As I downloaded some image archives (no, they were not p’rn) containing pictures in the PCD format, which encapsulates six different resolutions of an image, I also had to find out how to explicitly extract the large (3072×2048) resolution. All in all, I created the following (simple) script that
- checks the PAR2 files, and if some of the data archive parts are incomplete,
- tries to recover them, and
- if the check (or recovery) was successful, extracts the PCD images and converts them to JPEG.
I did it this way:

#!/bin/bash
# Verify, includes *PAR2 automatically
par2 v *par2
ret=$?
if [ $ret -eq 0 ]; then
    # Extract
    unrar x *part01.rar
    cd "$(find -type d | tail -n 1)"
    # Convert
    for file in *PCD; do
        echo -n "$file "
        convert $file[5] $(basename $file .PCD).jpg && rm $file
    done
    echo Done.
elif [ $ret -eq 1 ]; then
    echo "PARITY CHECK FAILED, TRYING TO REPAIR."
    sleep 2
    # Repair, if needed
    par2 r *par2 && echo "You may now rerun this script."
else
    echo "REPAIR NOT POSSIBLE."
fi
Thursday, July 17. 2008
Sometimes I take AVI videos with my Canon PowerShot, and I process some of them with a video editor and export them as MPEG. However, those video formats can’t be streamed, so I like to convert them to FLV to enable YouTube-like streaming in web galleries via a Flash video player. For convenience, I wanted a context menu entry for Nautilus, where I could right-click on a video file and select “Convert video to FLV”. Luckily, Nautilus supports executing arbitrary scripts from the context menu if you simply place them into ~/.gnome2/nautilus-scripts/. The only disadvantage is that it doesn’t check the file type in advance, thus also showing the video conversion entry for non-video files. However, you can include that logic in the script itself. The script is a little more complicated than necessary, as I didn’t know a better way to parse the file names; probably I should have used Perl. Here is my ~/.gnome2/nautilus-scripts/Convert\ video\ to\ FLV:
#!/bin/bash
zenity --question --title "Convert video to FLV" --text "Really?" || exit
IFS=$'\n'
for file in $NAUTILUS_SCRIPT_SELECTED_FILE_PATHS; do
    ext=$(echo $file | awk 'BEGIN {FS="."}; {print $NF}')
    filebase=$(basename "$file")
    path=$(echo $file | sed -e "s/$filebase//")
    filebase=$(basename $filebase .$ext)
    if file -i "$file" | grep -i video >/dev/null; then
        ffmpeg -y -i "$file" -ar 22050 -ab 32 -b 564k -f flv -s qvga "$path$filebase.flv" &
    else
        zenity --error --title "Not a video" --text "Hey, $filebase.$ext is not a video!"
    fi
done
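As a side note, the awk/sed/basename juggling above only splits a path into directory, stem and extension. In Python this would be a few lines with pathlib (the example path is made up for illustration):

```python
from pathlib import PurePosixPath

# Split a path into its directory, stem and extension.
file = PurePosixPath("/home/user/videos/clip.avi")
print(file.parent)               # /home/user/videos
print(file.stem)                 # clip
print(file.suffix)               # .avi
print(file.with_suffix(".flv"))  # /home/user/videos/clip.flv
```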