Archive for the ‘Technology’ Category

Linux and SmartBoards

6 February 2017

Regular readers may recall that one of my major activities is helping keep the technology going at St. John’s Episcopal School and Church.

We have four SmartBoards in the building.  A full SmartBoard installation requires the board (essentially a large touchscreen), a computer, and a projector that projects the computer display onto the SmartBoard.  Software on the computer interfaces with the SmartBoard and interprets touches (as mouse clicks) and swipes (as various line draws, highlights, etc.).  The swipes are usually overlaid on whatever the computer is displaying.

The computers we have driving the SmartBoards are very old, 2004-vintage, and running XP.  I did an XP install from scratch on one, and it sped up a little, but it still would not play videos, and the SmartBoard drawing was very sluggish.

A buddy of mine from the Omaha area (thanks, Stan!) donated one of his computers to St. John’s.  It’s a dual-core 3GHz machine with 8GB of memory.  I decided to replace one of the SmartBoard computers with this one.  The license tag was for Vista.  I decided that since SmartBoard supported Linux, that’s the route I would go.

The requirements stated by Smart were a 1.2GHz machine with 1GB of RAM, running Ubuntu 14.04.  I had the most recent Ubuntu, 16.04, on a USB stick, so that’s what I used.  The install and setup were smooth, as expected, as this was the eighth computer I have installed 16.04 on.  Then I noticed in some fine print on the installation errata that the SmartBoard drivers would only work on a 32-bit (i386) architecture.  Well, crap, the install I had just done was 64-bit.  So off I went and downloaded a 32-bit version of Ubuntu 16.04.  That install was very smooth as well.
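If you’re not sure which flavor a machine is running, uname settles it; it reports x86_64 for a 64-bit install and i686 for a 32-bit one:

uname -m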

I had to go through a lot of gyrations with Smart to get a product key to allow me to download the Smart Notebook software.  I had registered one SmartBoard with them back when I first installed it, and while I registered the other three in the process, only that first registration got me an authorized product key (although the terms for that key stated that the software could be installed anywhere in the building.  Whatever.).

I unpacked the Smart Notebook software and drivers, and started reading the installation instructions.  The files were in .deb archives, which are usually very straightforward to install.  There were a lot of instructions from Smart about setting up PGP and running scripts with their key and my key to sign the archives prior to installation.  The first time, I followed their instructions to the letter, and the process immediately failed with NO explanation except “signing failed”.  Hmmm…

After about two seconds of thought, I said THWI, and started installing the .deb files as they were unzipped.  I did try to do this in a reasonable order (the common files first, etc.).  All reported successful installation.  Usually after this, I would try to start whatever service had been installed, but I didn’t see anything like that in top, so I just restarted the whole computer.  When I logged in again, the status light on the board was on and solid, which indicated that the board and computer were communicating.  I did some pokes at the board, and darned if the thing wasn’t working.  I aligned it, and all was well.
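For anyone following along, hand-installing a stack of .deb files is just dpkg; the package names below are hypothetical stand-ins for the Smart archives, common files first, with apt-get sweeping up any missing dependencies afterward:

sudo dpkg -i smart-common.deb
sudo dpkg -i smart-hwr.deb smart-product-drivers.deb
sudo dpkg -i smart-notebook.deb
sudo apt-get -f install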

I fired up the Smart Notebook software, and got a splash screen, but nothing else.  It sat for a while, still nothing.  I went to the terminal I had open, and any command reported that no child processes could be spawned, which is usually an indicator that all resources are sucked dry.  I restarted again (at least graceful restart was still there), got on a terminal and saw the usual stuff I would expect (along with the SmartBoard drivers, very cool), and then fired up Notebook again.  This time, ps -x showed more new processes spawning than I could keep up with.  When they got up to 20,000+, the machine basically threw up its hands.
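If you ever want to watch a runaway like that unfold, a once-a-second process count tells the story (a generic sketch, nothing Smart-specific):

watch -n 1 'ps -e | wc -l'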

I went off to research.  While I found the same question on a number of forums, the answer was at the very bottom of the errata sheet for Notebook 11:  Notebook will not work with Unity, which is the default desktop of Ubuntu (and was for 14.04, which is the baseline Ubuntu for SmartBoards).  I installed a GNOME desktop, restarted into it, and fired up Notebook, which ran perfectly.
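Adding an alternate desktop on Ubuntu is a single package install; I believe the package below is the right one for 16.04, and you then pick the session from the gear icon on the login screen:

sudo apt-get install ubuntu-gnome-desktop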

I would say that Notebook running wild under Unity is a major bug that should be addressed by Smart.  I don’t think they will; the latest Notebook for Linux is 11, and the Windows version is 16.

Regardless, my favorite teacher likes the new computer, is comfortable with Linux, and likes that the new machine can run SmartBoard programs, annotate documents, and all the other cool stuff that SmartBoards can do.  She can also play YouTube videos and stream PBS and news programs for her kids to watch thanks to the zippy new computer.

So I’m calling Linux on SmartBoards a win overall.  Next, I will deploy Linux on the current machine driving another SmartBoard (a GX270) and see if performance is better than the XP that’s currently on that one.

Some Good Android Connectivity

28 September 2016

Last week, I was in Glacier National Park. I had traveled up there pretty light electronics-wise. I had my Galaxy Tab S2 and my S6 phone with me, and that was about it.

It occurred to me to check the memory use of the phone: it had about 18GB of pictures on it, out of 32GB. The tablet had about 4GB used of its internal 32GB, and another 16GB unused on an SD card. I didn’t want to run out of space, so I really wanted to transfer the photos from the phone to the tablet. I didn’t have any cables, but I remembered that both had Bluetooth, and that Bluetooth can be used to transfer files.

I talked to Jason, who told me that once the devices were connected, there would be a Share With… option. I turned on BT on both, paired them, then fired up the Gallery picture app on the phone, and there was a Share option, which when pressed listed the tablet. I highlighted all of the photos and started the share. The first attempt failed to transfer anything for some reason, but I tried again, and both the phone and the tablet put up status banners: “Sharing xxx file of xxx via Bluetooth”.

It took about two hours to transfer the 1500 photos from the phone to the tablet (a little slower than I thought it would be); I just let the devices sit overnight. The next morning, I turned off Bluetooth on both, then looked around on the tablet, and after a bit of looking, sure enough, there were the photos.

I deleted the photos on the phone (always a bit nerve-wracking), and went off to the trail knowing I wouldn’t run out of space for photos.

I just looked at a couple discussions of file transfer rates:

Bluetooth: 2Mbps
USB 2: 480Mbps
WiFi (N): varies 7 – 72Mbps
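Doing the math on those rates makes the ordering obvious:

hours = (GB × 8 × 1024) / Mbps / 3600
18GB over Bluetooth at 2Mbps:  18 × 8 × 1024 / 2 / 3600 ≈ 20 hours
18GB over USB 2 at 480Mbps:    18 × 8 × 1024 / 480 / 3600 ≈ 5 minutes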

It’s apparent that USB would be the way to go, if you have cables. I may have to experiment a bit, since I’ve got a lot of pictures on the phone again :).

Hilton Hotels Premium Internet, Again

24 August 2016

I recently looked at the relative increase in bandwidth you get by paying for premium internet access in Hampton Inns in the Boston area.

This evening, I’m at an Embassy Suites in west Omaha, and here is the updated data:

Standard and Premium Hilton Internet

The Embassy Suites data is on the right, and you can see that the effective bandwidth is doubled, from under 3Mbps to 6Mbps+.  However, in every case at the Hampton Inns, the cost of premium service was $4.95, while here at the Embassy Suites, it is $12.95.

While the Embassy Suites premium is a lot more expensive, the bandwidth is not a proportionate increase over what the Hampton Inns offered.  Yes, it is a larger hotel, but then again, it has a lot more rooms to fold costs into.  I’m not sure a more-than-double cost that doesn’t net a significant increase in bandwidth is worthwhile.

Hampton Inn Premium Internet Comparison

7 August 2016

Since I stay in a lot of hotels on business and personal travel, I tend to get Internet from the hotel a lot as well.

On this past trip, we stayed in a series of Hampton Inns on the back end.  Since I had three laptops and two tablets (and occasionally a couple of phones), we had a lot of connectivity going on.

I decided to do a bit of comparison of the devices and internet speed.  For the speed checks, I used the reliable DSLReports.com.

A couple of ground rules.  First, I established that my tablet measured the same speeds as my laptop.  Testing was done one device at a time, while the others were not doing any high-bandwidth stuff like streaming.  The three laptops were always within a couple Kbps of each other when testing.

Given all that, here are the cumulative results of five tests I did, at different points during the evening, late evening, and morning.

Standard and Premium Hampton Internet

The first question I had was “do you actually get more bandwidth with the premium service?”  The answer is pretty clearly yes.  The premium charge was $4.95 per day.  In two of the cases, there was a small increase, but in the other three cases, it was significant.

I was surprised at the amount of bandwidth allocated for uploading.  Most network transactions on the web are a short request, followed by a lot of data coming back.  It seems that some of the bandwidth that is reserved for uploads would be better used for servicing downloads.

My bottom line is that if you need to maintain a stable VPN, or you are running big mail syncs like Exchange, the premium is usually worth it.

Something I Don’t Get About Online Ads

1 July 2016

I’ve noted many times that visiting some sites suddenly gets you targeted by ads for those sites when you browse elsewhere.

An example: we needed a refrigerator to supplement our aging unit. I looked online at lowes.com and several similar places.

As is my habit, when I was done looking at refrigerators, I closed the tab and opened a Facebook tab. I was immediately shown a set of ads for refrigerators from Lowe’s that included the units I had been looking at. So that’s interesting: close-to-real-time sharing of ad information that managed to be tied to me personally.

This went on for a couple of days. I would occasionally return to lowes.com (and to other sites) to look at refrigerators. Eventually, I settled on one, and it happened to be a Lowe’s unit. I went to lowes.com, found that unit, and then went through the order-and-pay sequence, which included setting up delivery to my house. I was pretty impressed by how easy it was.

That was last Thursday. Since then, I have been followed by the same Lowe’s (and a couple of other companies) ads as I have moved around the net.

And that’s what made me wonder. Lowe’s clearly could tie me as a visitor to their site (and interested in an item or type of item), and then could tie me to Facebook, and CNN, etc. to show me that ad on those sites. Why did Lowe’s not also recognize that I had bought the refrigerator, and then either stop showing me the ads for that unit (and others, since I am unlikely to buy two refrigerators), or maybe show me ads for ice makers or other refrigerator accessories, or maybe related items like a new oven?

Don’t get me wrong, I don’t really want ads, but OTOH, they help pay for my internet experience, and so they are a necessary thing. But if I were a marketer, I would use the info I have to try to get people to buy as much crap as possible.

An Amusing Email Loop

21 June 2016

Last week we were in Dallas. I was on a business trip, and I took the family with me so they could have a bit of recreation while I was working during the day.

Per company policy, I rented a car to drive down. I was given a Nissan Rogue, which is the first time I’ve driven one of those.

At some point, as is my usual practice, I connected my phone to the car using the car’s Bluetooth. I get a lot of calls, and tend to take them in hands-free mode.

Here’s where the amusing part comes in. My work email is set up to send me a text message when I receive a message. The text has enough of the message for me to determine how to prioritize answering it. So I’m driving along, and an email comes in at work, which generates a text message. I check it out; not a priority.

Then I get another text message about 15 seconds later. Then another, then another… this went on for a couple of minutes, and each of the messages was the same: “Sent from my car”. The pace picked up as new work emails arrived (about four in a five-minute span), each kicking off its own loop, so I was getting a LOT of texts. I turned off Bluetooth, and that stopped the cycles.

So what happened: my work email would receive a message, and send me a text that I had a new email. The car radio would auto-reply to that text with the “Sent from my car” message, which went back through the carrier’s text-to-email gateway (keyed to my phone number), arrived at my work email, and generated another text to my phone. Naturally, my work email had a lot of these messages piled up in my inbox.
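Schematically, with a made-up gateway address standing in for the carrier’s real one:

new email at work   ->  SMS alert sent to my phone
car sees the SMS    ->  auto-replies "Sent from my car" as a text
carrier gateway     ->  4055551234@txt.example.com -> lands in my work inbox
new email at work   ->  SMS alert sent to my phone… and around it goes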

I just left my phone disconnected from the Rogue’s Bluetooth (hah, no more “rogue messages”!) for the remainder of the trip. I haven’t looked to see if there is a setting in the car to turn off the auto-generated reply.

One comment on that: the user interface of the Rogue’s display was very poor.

Perils of technology…

Running A Laptop As A Virtual Machine

27 May 2016

This is a post I started back in March, I’m just now finishing it :).

I’ve been carrying a series of work-issued laptops for more than 15 years.  About six years ago, the Air Force issued me an HP 6930p.  It was a workhorse and served me well.  My company issued me an HP 6570b back in December, and after I changed contracts in March, I turned the 6930p back in with some regret.

I had previously backed up all the work files from the 6930p to the 6570b; that was easy.  But the 6930p had some apps on it that I wanted to keep access to and could not transfer.  Since I didn’t want to buy an aftermarket 6930p, and I sure didn’t want to carry two laptops, I decided the best way to keep those apps around was to virtualize the 6930p.  I did some research and decided I would install a second disk in the 6570b.  I bought a 2TB laptop disk and a drive carrier, and started experimenting.

The first thing I did was install disk2vhd on the 6930p.  I told it to capture the disk, and off it went; this was about 2100.  From the progress bar, it looked like it would take about three hours to capture the disk.  I let it run overnight.  At 0200, W10 installed updates and rebooted, which killed the capture.  I started it again the next afternoon, and at 1700 it was still running.  I carried the running computer out to my car while it kept capturing, and it was still capturing at home at 2300 when I went to bed, and at 0700 the next morning.  Hmmm…

I killed the process, and went looking for info.  Turns out that is common behavior for disk2vhd.  OK.  I noted the vhdx file was about the right size, so I thought WTH and tried to mount it.  Windows told me it was already mounted.  It would not un-mount, which meant I could not copy it.  I restarted the computer with System Rescue CD, mounted the W10 drive, and copied the vhdx file off to a thumb drive.  So far, so good.  It was interesting that Windows had found the vhdx file and auto-mounted it on its own.
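From System Rescue CD, that copy is a pair of mounts; the device names and path here are placeholders for whatever your disks come up as:

mkdir /mnt/win /mnt/usb
ntfs-3g /dev/sda2 /mnt/win
mount /dev/sdb1 /mnt/usb
cp /mnt/win/Users/Bill/6930p.vhdx /mnt/usb/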

A note: I tried the disk2vhd program several times to try to get it to terminate cleanly.  I tried changing the output to vhd, and several other things (it’s easy to let the machine run overnight for tests like that).  disk2vhd never properly terminated, but it still produced good files.

In the meantime, I was getting the 2TB drive ready.  I decided I would like to have my old friend Fedora running again, so I downloaded Fedora 22 and installed it.  But it would not get the laptop wireless working.  Yum didn’t work at all (weird, that one).  A couple of other devices were not working either.  I played with it on and off for a couple of days, eventually got annoyed, and downloaded Ubuntu 14.04 workstation instead (I run 14.04 server on the school server, so that was a good match).

I had to use diskpart to hammer the existing Fedora installation; for some reason Ubuntu wouldn’t overwrite the disk.  I installed Ubuntu, and at the very end, it noted that it was installing GRUB.  I booted Ubuntu and it worked great, all devices working, looking good.  Except that Ubuntu or GRUB had reached out to the other physical disk and wiped it out, very annoying.  I got that disk fixed, then came back and re-installed Ubuntu on the 2TB disk with the W7 disk completely removed from the computer and locked into a lead-lined vault (just kidding about that last part).

I downloaded Oracle’s VirtualBox and installed it.  Then I copied the vhdx file over to the Linux disk.  I tried starting it, and VirtualBox helpfully told me to change the BIOS setting of the computer to support virtualization.  I rebooted, made the BIOS change, got back into VirtualBox, and started the VM, and… it started.  Just like that.  Whoa.
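For reference, the same setup can be scripted with VBoxManage; the VM name and file name below are my placeholders, and if VirtualBox balks at a vhdx, VBoxManage clonemedium can convert it to VDI first:

VBoxManage createvm --name w10-6930p --ostype Windows10_64 --register
VBoxManage modifyvm w10-6930p --memory 4096
VBoxManage storagectl w10-6930p --name SATA --add sata
VBoxManage storageattach w10-6930p --storagectl SATA --port 0 --device 0 --type hdd --medium ~/6930p.vhdx
VBoxManage startvm w10-6930p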

I was presented with my W10 login screen, and logged in.  There was my W10 desktop, surrounded by Linux.  Weird, and cool.

When it started, VirtualBox showed several messages about keyboard and mouse capture, but both worked equally well in Linux and in the VM.  The VM used the wired network connection that Linux had, no problem (and I found later that it worked great when Linux was on a wireless connection as well).

There are a couple of oddities.  The video driver that the VM uses isn’t the 6930p video card, so instead of a 1280×800 (wide) display I get 1024×768 (I looked very briefly at installing a virtual driver but didn’t follow up).  One app (my Garmin BaseCamp GPS mapping tool) complains that it can only run in 2D mode instead of 3D mode due to the video, but I don’t notice any difference.

W10 boots a little slower, but once booted it runs pretty darn fast.  I haven’t been able to get the VM to recognize USB drives.  Linux and VirtualBox recognize them, but the configuration setting doesn’t pass the drive through to the VM.  I’ve made up for that by using Google Sites and Google Drive to pass data into and out of the VM.
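If I ever chase the USB problem down, the usual suspects for VirtualBox USB passthrough on Ubuntu are group membership and the extension pack; a sketch of the first check (log all the way out and back in after the adduser):

sudo adduser $USER vboxusers
VBoxManage list usbhost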

In the VM, I used a license crawler tool to recover the MS Office license, then I removed Office (I use Office on my W7 laptop, and LibreOffice in Linux, with no issues transferring between the two).  That Office license will go toward upgrading Raegan’s Office on her desktop.

I don’t know how long I will use the W10 VM.  I made an effort to ensure that my Ubuntu setup would do the same stuff that the W10 install would do.  There have been two things I’ve had issues with: one is a replacement (or rehosting) of the Garmin BaseCamp tool, and the other is a tool to convert a series of JPEG images captured from a wireless camera into an MPEG for viewing.  I have access to an XP machine to do that right now, and it works OK.
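That JPEG-to-MPEG conversion is one place Linux could probably take over; ffmpeg handles numbered image sequences directly (the filename pattern and frame rate here are illustrative):

ffmpeg -framerate 10 -i capture_%04d.jpg -c:v mpeg2video -q:v 4 capture.mpg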

So the virtualization effort was pretty painless.  The VM, when it is running, doesn’t impact my Ubuntu performance.

I might virtualize my W7 installation and see how it works next…

Adventures In Ubuntu, VMs, and GPS

21 April 2016

NERD ALERT:  Nerdy talk follows!

Since I switched my HP laptop to Ubuntu Linux, I have made a fairly smooth transition in terms of software. I can get company email via webmail (using a security token for the connection), even though the webmail is Microsoft Outlook Web Access and the browser is Chrome. In the past couple of days, I’ve used LibreOffice to build briefings, create documents, and read stuff for work, used various Google apps to transfer files around, and generally had a problem-free transition. There are a couple of nits. One sounds silly: I edit pictures quite a bit. In Windows, I could use Paint to add text and draw lines as pointers. In Linux, GIMP does the text just fine, but it doesn’t draw lines the way Paint does. I’ll figure that out.

The one thing that’s weird is working with GPS files. I do a lot of GPS work for planning hiking and backpacking, and then downloading the saved tracks from the trips. Those require a bit of editing to clean them up, join tracks from each day, and the like.

We just got back from a nice trip to Eastern Oklahoma, and it was a bit of an effort to get the tracks out of the two GPS units. I carried a Garmin GPSMap 60, and Ian carried a Garmin GPSMap 62s.

I’ve tried a couple of Linux tools to extract the tracks (via a USB connection), and had trouble getting them to recognize the devices. I also tried to install the Garmin BaseCamp tool I’ve used forever using Wine, and had no luck. One tool (QMapShack) I tried to install from source, and between it requiring a specific version of cmake and other oddities, I couldn’t get it to work. I tried installing the Windows version, but it requires the Visual C++ redistributable, and that wouldn’t install. So that was just Too Hard.

BTW, the GPSBabel command I eventually used to pull tracks from the GPSMap 60 was:

gpsbabel -t -i garmin -f usb: -o gpx -F [trackname.gpx]

In the end, I decided to use the BaseCamp tool in the Virtual Machine image of my previous HP 6930p, which I had brought into VirtualBox under Ubuntu. The problem was getting the GPS tracks to the VM. I tried some stuff to make the GPS units visible to BaseCamp under VirtualBox; no way. With the 60, it took the obscure GPSBabel command line above (GPSBabel was installed along with Ubuntu) to get the track data out and into Linux. The same didn’t work for the 62s. Turns out the 62s mounts as a USB stick as far as Ubuntu is concerned, and the track data is in a folder a couple of levels deep.
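On my system the 62s showed up under /media with the tracks in a Garmin/GPX folder; the exact mount point will vary:

cp /media/bill/GARMIN/Garmin/GPX/*.gpx ~/tracks/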

So now I had the files, but I still needed to get them to BaseCamp. USB sticks were tried, with no luck; I’m pretty sure the stick(s) were configured to be visible to the VM, but they didn’t show up.

In the end, it took a roundabout route. My laptop had Apache installed on it. I made a connection to WiFi (which got the laptop an IP address). Then I copied the two GPX files to the root of the web server and started Apache. I went to the VM, fired up a Windows command prompt, and could ping the laptop’s WiFi IP address. I fired up Chrome, typed the IP address, and added the filename of each GPX. That got them downloaded.  They came in from Chrome with an additional .xml extension (so they looked like gpsmap60.gpx.xml), but a rename fixed that.
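The Linux side of that dance boils down to a few commands, assuming Ubuntu’s default Apache document root (the interface name will vary):

sudo cp ~/tracks/*.gpx /var/www/html/
sudo service apache2 start
ip addr show wlan0

Then, in the VM, browse to http://<laptop-address>/gpsmap60.gpx and save.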

Then I fired up BaseCamp and imported the tracks, and editing worked well.  Once the tracks were in and edited, I displayed them on a topo map, and as an altitude plot.  In both cases, I did a screen capture of the display that included the Windows VM, and the capture was saved in the pictures folder of the Linux box.  From there, I brought the captures up in GIMP for annotation, and from there they went to Google+ with the photos I took on the hike.

This was all pretty cool and easy for me, but I think for a non-geek it would have been sorta hard.

Hijacked!

8 April 2016

Well, crap. A couple days ago, I noted that the St. John’s server was acting very slow. I waited until the evening to check on what was happening, and around that time saw a huge number of email bounce messages from various email providers like AT&T and Cox. Something was wrong.

I quickly found out that St. John’s was the source of thousands of spam messages, headed all over the globe.  I killed the mail server program, Postfix, and the spam stopped, and the system sped up significantly.

I spent a couple of days on and off trying to find where the spam was coming from.  I did network sniffing at both the external and internal network cards, but all I found was the normal traffic I would expect (i.e., nothing was feeding the server spam from either the big bad Internet or from inside the building).

It quickly got to the point where no effective email service was available, due to our being put on a couple of blocklists.  And the CPU on the server, which is also the router that gets people in the building out onto the Internet, was being eaten by the bot, which was clearly running inside the server.

Now, there are many thousands of Windows malware variants, including viruses, bots, and the like.  There are only a couple that affect Linux boxes.

I had been working with our ISP (Cox) on this.  I had one hint from them: that we had the Alureon (AKA TDSS) virus.  They also gave me an IP address for the virus command server (a computer in Russia).  I blocked that IP address on both sides of our connection using iptables.  But Alureon is a Windows virus, not Linux.  I downloaded a tool to check, and literally hit every machine in the building: nothing.  So that left a couple of laptops.  But I don’t think this was a valid hint, as the spam kept coming even when I pulled the RJ-45 out of the building network connector.
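For the record, blocking one hostile address in both directions with iptables looks like this; the address below is a documentation placeholder, not the real command server:

iptables -A INPUT   -s 203.0.113.7 -j DROP
iptables -A OUTPUT  -d 203.0.113.7 -j DROP
iptables -A FORWARD -s 203.0.113.7 -j DROP
iptables -A FORWARD -d 203.0.113.7 -j DROP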

One of the blocklists told me that I had a Grum botnet client.  Again, that’s a Windows-based bot, so who knows.

Finally, I gave up.  I had read over and over that rootkits on Linux were nearly impossible to find and eradicate.  I shut down every service on the computer, pulled the config files and logs off, and then wiped the machine and reinstalled Ubuntu.

I went back and got the basic machine running, created users, and changed every password.  I made sure I had a good firewall running by setting up iptables to let only a certain number of ports through and zorching everything else.
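The shape of that ruleset is a default-deny policy with a short allow list; a minimal sketch, with example ports:

iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -p tcp --dport 25 -j ACCEPT
iptables -A INPUT -p tcp --dport 80 -j ACCEPT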

Next, I got email going again.  When I installed, I had specified a mail server (Postfix) and a LAMP setup.  While those were helpfully running after the install, I shut them down (except the web server) until I could configure them properly.

I had installed several packages from the Ubuntu Software Center in the week or so leading up to this, so naturally I wondered if one of those was the attack vector.  I have not reinstalled those packages.

I spent time last evening and today working to get us off the various blocklists, and that seems to be going OK.  When I get some time, I am going to try to look through the logs and determine where the attack came from.

I’ve always had a great deal of faith in Linux (in fact, I recently switched my laptop to Linux only and have been very happy with it), but this incident has me a little paranoid.  One thing I will do in the next day or so, when the system is quiet, is clone the drive so that I can restore it quickly if this happens again.  I will also do some research to see if I can find out what I might have missed while setting things up and running them.  I also need to get the extra stuff going that I had before.
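For the clone, dd to an image on an external drive is the simplest route; the device and destination are placeholders, and it should be run from a live CD so the filesystem is quiet:

sudo dd if=/dev/sda of=/mnt/backup/stjohns-server.img bs=4M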

Another Example of Amazing Google Integration

23 March 2016

I was running errands in Salt Lake City yesterday, and saw two examples of pretty interesting integration that Android performed.

To set this up, I booked the trip up here Saturday afternoon.  As I always do, I emailed the reservation information from our company travel booking system to my personal email, which shows up on my phone.

The first example of integration was noticing that my Android-powered Galaxy S6 had apparently raided my email, extracted a pair of .ics (calendar) files, and put the calendar entries in my phone calendar.

Now, I have twice sent suggestions to American Airlines related to this.  When you ask American on their website to send you .ics files for a reservation, it sends one .ics file for each flight on the itinerary.  Say the flights are on 12 and 15 April, with the outbound legs on the 12th at 0830-0930 and 1015-1245.  American sends four .ics files that each contain the entire itinerary, and the dates are at midnight in every case.  Not very useful.

Android parsed out the exact flight times and put those in the calendar as separate entries, which is much more useful.
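The difference comes down to whether the event carries a real timestamp.  A date-only DTSTART defaults to midnight, while a useful entry looks more like this (the flight details are made up for illustration):

BEGIN:VEVENT
DTSTART:20160412T083000
DTEND:20160412T093000
SUMMARY:AA 1234 OKC-DFW
END:VEVENT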

The second example of integration:

Screenshot: Google Maps with markers for the Hilton Garden Inn and the Salt Lake City airport

I had fired up Google Maps to find a Target store in the SLC area.  Note the two markers for the Hilton Garden Inn and the Salt Lake City airport.  The dates of my stay at the Hilton, and my flight departure date and time at the airport, are correct.

So Google noticed the email with the .ics entries, parsed out the information, stashed it in my calendar, and then associated it with Google Maps, all without any input from me.

I find that pretty darn amazing.  I have felt for some time that location-based data is one of the best applications of technology, and this is a fine example of how location-based services can be useful.

Automated Software Installation

3 March 2016

I should not have been surprised by this, once I thought about it a bit.

At work, I was told that I needed to have Microsoft Project installed on my computer so I could build schedules. OK, I thought. I already had Office 2010 on my computer. I went to a “Software Store” on the company Intranet, and found a list of available software, including Project.

There was also a lot of Open Source stuff that was listed as approved for use on company computers and networks, which I thought was cool. There were development environments like Netbeans, and support tools like PuTTY.

Regardless, I selected Project, typed in a short justification (more than “I was told to” 🙂 ), and submitted the request. I figured that after a couple days, a tech would come around, or remotely log into my machine, and do the install.

Instead, about five minutes later, an automated process popped up a window telling me that installation was about to happen. I quickly shut down the couple apps I was working on, and the automated process installed Project, some other patches, and support tools, then it rebooted the computer, and that was it. I was pretty amazed. It took about 10 minutes.

I should not have been surprised. Considering the extensive updates that Microsoft has been pushing out for years (think patches for XP, or a complete unattended upgrade from Windows 7 or 8 to Windows 10), the installation of a single new piece of software is pretty easy by comparison.

It was cool regardless.

Someone at Microsoft Was Not Thinking Clearly…

29 February 2016

I have Windows 10 installed, and when I want to look at a photo, say one that is saved on my phone, the Microsoft Photos app is the default viewer.  It has a neat little editor with a couple functions.  The one I use most often is to crop.

If you save the cropped photo, it overwrites the existing file.  Understandable.  But if you Save As…, instead of giving you a dialog box and letting you specify where you want it to go (say, on your desktop), it saves the file to this folder:

C:\Users\Bill\AppData\Roaming\Microsoft\Windows\Libraries\SavedPictures.library-ms

That’s not terribly user-friendly.  If you see it soon enough, there is an “Open Save Folder” notification that occasionally shows up; otherwise you are left to root around in the filesystem to find where it put the file.

How about a folder dialog, Microsoft?

“Hacking”, WiFi, and Journalism

25 February 2016

I’ve seen several variations of this story over the past couple days:

Steven Petrow, the journalist who had his computer hacked while on a flight, recounts his experience and what he learned.

Each version of the story I’ve seen has emphasized that the guy had his laptop hacked on a flight. He was using GoGo In Flight for in-flight WiFi.

For the record, the guy, or rather his computer, was not hacked.  He was sorta personally hacked in the sense that his information was exposed to a stranger, but his machine was not compromised.  Some of his data was.

Let me explain the difference.  Mr. Petrow was on an airline flight, using his laptop, connected to the inflight WiFi.  GoGo In Flight is an open WiFi access point.  This means there is no encryption.  Now, when you pay money (via a credit card transaction) to use GoGo, the transaction is encrypted using SSL between your computer and the GoGo server.  Once that’s done, the connection reverts to unsecured, and you are connected to the Internet.

Mr. Petrow was using his computer to write an article, and submitted that article to his employer.  I’ve seen references to his sending it via email, but the mechanism is not clear.  Near him (and it doesn’t matter if near means the next seat over, or the back of the airplane), a guy was using a WiFi sniffer tool to watch the WiFi traffic.  Since the access point was open (no encryption), the “hacker” (although a better term might be “sniffer”) could see (and capture, if he wanted) every packet of traffic sent to and from the access point.
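Nothing exotic is required to do that.  On Linux, putting a WiFi card into monitor mode and dumping traffic is a handful of commands (the interface name is a placeholder):

sudo ip link set wlan0 down
sudo iw dev wlan0 set type monitor
sudo ip link set wlan0 up
sudo tcpdump -i wlan0 -A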

Now, a point that has to be made here is that anyone doing anything sensitive against a server with even the least security on it would be using SSL encryption, which protects the traffic from your device all the way to the server.  That traffic can be seen and captured, but it is encrypted, and would take a significant effort to decrypt (by significant, I’m talking years of computation).

So for the hacker/sniffer to see Mr. Petrow’s traffic, the traffic would have to have been unencrypted.  It could have been unencrypted email (SMTP/POP3 protocols), or unencrypted webmail.  Regardless, both email servers and clients, and web servers and web browsers, have had basic encryption built into them since the early 2000s.

So the hacker/sniffer saw the email with the article, which was sent unencrypted.  The hacker/sniffer did not attack or tamper with the computer Mr. Petrow was using.  That is not being hacked; it is being eavesdropped on.

Whoever Mr. Petrow works for, their IT department should configure the company’s servers to require encrypted connections.  All major email servers and clients support encrypted connections.  All major web servers and browsers support encrypted connections.

As for the sniffer/hacker, what he did is trivial from a technology standpoint.  I’ve used similar tools to look at WiFi traffic, on airplanes and elsewhere.  You might not be surprised, but while in hotels, I have seen examples of half of the connections being to porn sites.  Using sniffer tools, you get an idea why hotel WiFi is often so slow: most of the connections are to streaming video sites (think porn, and Netflix, and Hulu).

The above might sound frightening, but I think most businesses that have an interest in keeping customer information safe (think banks) implement end-to-end encryption as a matter of course.  A news site like CNN might not care to encrypt what a site visitor is reading, though.

The real issue here is that the story being reported is wrong.  It’s not a case of hacking, it’s really an example of not implementing best practice for securing data.  And that is something that is easily fixable, once you realize what the real problem is.

Apple, Terrorist Investigation, and Security

23 February 2016

I have been following the dustup between the FBI and Apple with interest.

For the record, I am opposed to any police or government agency having access to the daily communications of people who are not under investigation. I also do not think that cracking this phone compromises the millions of other iPhones in existence.

On the other hand, in the case of the San Bernardino terrorists, a crime has taken place, with a resulting legitimate investigation, and the police agencies have a legitimate requirement to access data about the criminals, including any information stored in their phone. I imagine that the records of who the criminals called have already been gathered from the cell phone companies.

Apple and Apple’s supporters use the argument that any hack/engineering/software tool that Apple creates should not end up in the hands of the police agencies. I agree with that.

I also suspect the tool to crack an iPhone PIN already exists.

I think that there can be a reasonable approach here. Chain of custody must be maintained. The FBI can swear select Apple employees to an agreement that they will not disclose anything they happen to see in the phone. The FBI can send people with the phone to observe the crack process and ensure that the Apple people don’t zorch any information. Then, with the process complete, the FBI takes the phone back and harvests any information from it. The crack tool/process/person stays with Apple.

The key thing is that the tool/process remains with Apple, while the data goes to the police for investigation.

This is really no different from any other investigation or data request. I do think that Apple’s concern to protect all of the other iPhone users is admirable, and it should be taken seriously.

Old West Cafe, Sanger, TX

13 February 2016


As I headed from the Dallas area to OKC this afternoon, I was in Sanger, TX just after noon. I saw the Old West Cafe on Google and decided to check it out.

I got there at 1245, the place was about half full. I ordered the CFS with mashers, gravy, and pinto beans. The mashers and gravy were very good. I liked the pintos, even though they had a taste of jalapeno in them. The pepper caused just a little bit of mouth burn, but that went away very quickly.

The CFS has real potential. I would rate this one an 8 out of 10. It was fork-tender, and had pretty good flavor. It was clearly not pre-made. The problem I had with it is that it had been overcooked a bit. The breading was quite stiff and a bit hard to chew. I think it needed about a minute less cooking time. The CFS here is likely a real winner if it’s not consistently overcooked.

The iced tea was very good, and service was fast and very friendly. This place was slightly unusual in that all ordering was done with what looked like a smart phone, and checkout was done via iPad with a Square-like signature capture.

I hope to be able to visit again at some point; the place has potential. My check was $12.21.

GPS Comparison Testing

29 October 2015

This past weekend, I took a group of Scouts from Troop 15 on a 10-mile hike for the Hiking Merit Badge. We went out to Lake Thunderbird State Park, where there are a number of hiking/biking trails that total just under 20 miles.

One of the things I wanted to do was check out the GPS capabilities of several of the units. When we go on these hikes, we typically start a GPS, and hike until it shows 10 miles.

I also had a secondary objective, which was to check out the battery usage of my Garmin GPSMap 62s. We had taken that unit on our backpacking trip in Colorado, and it seemed to use an entire set of batteries in less than a day. My GPSMap 60 usually gets about five days of use out of a battery pair. This time, I completely reset the 62s, put in fresh batteries, and at the end of the day, the battery indicator still showed full. So the reset probably cured whatever the issue was.

Anyway, I tested the following units: Garmin GPSMap 60 and 62s, a Samsung Galaxy S6, and a Google Nexus 5. Both of the phones ran Runkeeper.

Our hike was over a trail network that has a significant amount of weaving in and out, in order to maximize the mileage in the limited surface area.

At the end of the hike, these were the mileages displayed:


Unit         Displayed   GPX    Points Captured
GPSMap 60    10.02       9.7    1148
GPSMap 62s    9.97       9.5    1748
S6            9.90       9.9    1604
Nexus 5       8.20       8.2    —

These results are pretty annoying to me. I’ve noticed this GPX track shortfall (for example, the unit displaying 10.02 miles while the downloaded GPX totals only 9.7) numerous times over the years, but I do not understand why it happens.

The difference is particularly pronounced in the GPX for the 62s. It’s a newer unit, and it has a setting to control the granularity of the data collected. For this hike, it was set to the most granular setting and generated the largest number of data points, but the reported GPX is a half mile short, which is 5% and significant. I would suspect that the better granularity of the 62s gives the “real” mileage, as it would capture the numerous sharp bends in the trail network. But that is contradicted by the significantly shorter length of its GPX.

Note the very short mileage for the Nexus. We noticed its displayed mileage falling further and further behind the other units. Ian checked the GPS settings and found a power-saving mode was enabled that limited GPS updates.

Have a look at the ground tracks. Here is an overlay of the tracks of the two phones:

S6 and Nexus 5 Tracks

You can see that the S6 (green track) and the Nexus (red track) are close together for a bit, then they diverge (the series of straight red lines), then come back together again. It’s easy to see that the longer green tracks near the straight lines are the source of the mileage difference. The question is: why did the first mile-plus match up very well before they diverged?

Here are the overlays of the tracks of the 60 (green), the 62s (dark), and the S6 (red); I did not include the Nexus track due to the divergence:

Hiker and phone GPS overlaid

As I look at the tracks, I see about six areas where the green track diverges (in some places, significantly) from the other two tracks, adding mileage to the total. The 62s and S6 tracks (and most of the Nexus track) are very, very close.

My conclusion is that the newer GPS units are closer to showing true mileage.

I also looked at the altitude displays. These trails were fairly flat (relatively! 🙂 ). I used MapSource to generate altitude plots and captured them as identically-sized JPEGs, but there wasn’t an easy way that I could see to merge them together (I tried GIMP). So instead, I exported the altitude data to an Excel spreadsheet. The number of data points didn’t match, so I wrote a q&d program to read in the data points and insert a duplicate value every x lines. That got the dataset lengths pretty close. Then I imported them back into Excel and ran an XY plot:

Altitude comparison plot
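Incidentally, the q&d padding step can be a one-line awk filter that duplicates every Nth line; the filenames and the interval here are placeholders:

awk -v n=7 '{print} NR % n == 0 {print}' short-altitudes.csv > padded-altitudes.csv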

I noted previously that the altitude recorded by the GPSMap 60 is way spiky. The 62s is as well, but less so. The altitude differences between the two Garmin units are significant. Three notes: the 60 tends to show altitude significantly lower than the 62s for the most part (there are only two places where the altitudes match; elsewhere it reads 40 or so feet less than the 62s); the altitude shown by the 60 lags the 62s by some amount; and the 60’s altitude for the last couple of miles is really, really off.

The altitude recorded by the S6 is closer to the 62s altitude, and also lags the 62s, but in both cases not as much as the 60. The S6 is almost smooth, not spiky.

I am going to take the units out in a car and drive about 10 miles and check the odometer reading against the GPS display and GPX. I will report on that after I do it.

I think I need to take all four units in a straight-line test to see how those mileages compare.

A New Server For St. John’s

14 October 2015

As I noted in a recent post, the server for St. John’s developed an odd problem resulting in the essential loss of outside network connectivity. Since it wasn’t the cable modem, the network cable, the NIC, the PCI slot, or the mobo, that pretty much left the disk. Since that meant a complete rebuild anyway, I thought I would go ahead and make the leap from Fedora 2 to the latest version, Fedora 22.

By luck, the previous week a church member (thanks, Bob!) had donated a Dell XPS 400. Now, that’s not the newest computer around, but it was the newest one *I* had around. I pulled the Windows Media Center 160GB disk out (it was SATA, at least), bought a new 1TB SATA drive, and while doing that downloaded Fedora Server 22. I used the Fedora USB tool to build a bootable USB stick, and started the install.

I hit a bug here, and it was repeatable. Fedora got to the point of asking where the packages would come from; the choices were the boot media or the network. I had just downloaded 2GB of Fedora, so I selected the USB boot media. Then I answered a couple of other questions, and at that point the boot media selection had disappeared. This was odd, but when I selected Next (or whatever), it started the install, and then proceeded to download all of the install packages over the network. This was a bit annoying, as it took over an hour and wasted my time.

The rest of the process was pretty smooth, although it did take a while due to the installer downloading all of the packages a second time. After it was complete, the system ejected the USB flash drive and rebooted. So far, so good. Then the reboot, and… there was a system crash, “unable to handle kernel paging request”, followed by a second crash report of something like “watchdog detected CPU stop”. This repeated through about five restarts.

Off to research. I looked for fixes to these, starting with the paging problem. There were dozens of potential solutions, mostly relating to hardware memory problems. I ran the Fedora memtest, and the memtests from System Rescue CD and Trinity Rescue Kit: not a single problem. The disk was OK. I spent about four continuous hours working on this problem and never got a solution. (After about three hours of looking, I had started downloading Ubuntu Server.) Oddly, the machine ran the Live CD version of Fedora 22, and the live CD of System Rescue CD, with no issues.

So the installation of Fedora 22 Server was a failure, and it wasted about six hours of my time. I was highly disappointed by this; I’ve been using various flavors of Fedora since Core 2 with no issues. I expected the installation to be smooth, and instead I had two major issues.

I mentioned that I started downloading Ubuntu Server in the midst of this troubleshooting. After I reached my frustration point, I took a break to get dinner, then started installing Ubuntu. First, I moved it to my USB device. It booted, and got all the way to the second boot screen, and then it failed: Ubuntu was looking for the install packages on a DVD, and only a DVD. By this time, I was not feeling really confident. I burned the image to a DVD and started again. This time, it loaded and started up just fine after installation. I immediately downloaded a lot of the tools I regularly use, using apt-get.

Now, the server has a couple of major functions:

  • NAT routing for the building computers to the Internet.
  • Content filter (Dansguardian) to keep our students out of unsavory sites.
  • Inbound IMAP/POP3 email and outbound SMTP email.
  • Webserver for both external and internal use.
  • Shared drive using SMB.
I was doing this work in the computer lab, since it was a lot more comfortable than the computer closet; I had snaked a couple of long Ethernet cables from the closet for this. The system was in the process of downloading the tools I mentioned when the student computer next to me popped up a note saying a Java update was available. Just for the heck of it, I clicked OK, and it downloaded! The NAT routing had been automagically set up by the computer. Very cool. I ran a speed test and got 16Mbps from the server, and 16Mbps from the student computer. So the building Internet service was up and running. I noticed a lot of activity from faculty machines downloading email from our .com email service provider.

Next, I got the internal email set up. I had put in a lot of work on our Fedora 2 server to keep our computer from being used as a spam relay or a proxy; the anti-relay adjustments for Fedora 2 involved a lot of whitelisting and conf file tweaks. The new Postfix implements anti-relay out of the box, and further guards against relaying and spoofing by requiring authentication for sending as well as receiving; I like that. Postfix also integrates with anti-virus/malware and anti-spam tools as well.
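For the curious, the modern anti-relay behavior comes down to one line in Postfix’s main.cf; this is a typical setting (the stock default defers rather than rejects, but the effect on a would-be relay is the same):

smtpd_relay_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination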

The rest of the configuration was pretty straightforward. I have not installed the new ownCloud capability yet, and I have one issue with how the Squid proxy server is configured, but that should be fixed in the next day or so.

I had been planning to replace our Fedora 2 server for a long time, but having to do it under the pressure of a failure was not how I wanted to get it done :). And the Fedora 22 problem was very disappointing. Regardless, it’s working now, and that’s the bottom line.

A Weird Server Problem

11 October 2015

At St. John’s, we’ve been limping along with a creaky set of infrastructure, workstations, and server for a long time. Long enough that the server was still running Fedora 2 (!), which is very old. But it worked, and I understood it, and could maintain it with little problem. Every once in a while a component would fail, and be replaced, and the software would get updated.

Recently, there was a spate of problems that had to be addressed. I was in Muskogee a couple of weeks ago, and Internet access for St. John’s was very bad. I worked with the server and Cox (our ISP), and we determined that the cable modem was having problems. We bought a new one (see related post), Ian and I got it replaced, and all seemed well. We ran about 10,000 pings in flood mode, the link from the server to the modem was stable with no packet loss, and we went home. Until the next day…
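A flood ping like that is a one-liner; the address is a placeholder for whatever your modem’s gateway answers on:

sudo ping -f -c 10000 192.168.100.1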

I was on the way to SLC when the school called: there was still a problem with Internet access. From Dallas, I pinged the modem gateway and the server, and all seemed fine. When I got to SLC, I remoted in and noted a LOT of lost packets, 30-40%. Ian went up there that evening, found another network card and installed it, ran a ping test with no lost packets, and went home. All was OK for a week or so.

Then we started having a spate of problems again; server reboots would cure them for a couple of hours or more. Finally, last Thursday, it got to the point where we were suffering 80-90% packet loss. I started some serious troubleshooting that evening. First, I connected my laptop directly to the cable modem, pinged the hell out of it, then started downloading huge files: no problems found, so the trouble was certainly on the St. John’s server side. The network cable was OK as well. I figured I had another bad/failing network card, so I replaced it with a donor card. Still lost packets after about 10 minutes of good operation. So I bought a new card (these are PCI). Same behavior. Now I figured I had a bad PCI slot, moved the card to another slot, same behavior. WTH? So now I figured I had a failing motherboard. I had another machine that is a twin to the server, so I pulled the Fedora hard drive out and moved it to the other machine. It booted just fine, worked for network access, and I thought we were in good shape. Then, after about 10 minutes, I started seeing lost packets again!

The only thing in common with all these problems was the hard drive with Fedora.

I still don’t know what the problem was. I wonder if there is some overflow related to the long time the server had been in service, or an intermittent disk error. It would have to be in the logging system, since packet movement is handled entirely in memory. I will look at that later, maybe.

I’m in the process of building the new server; most of it is working, and I will finish the rest of it today. But that’s the subject of another post…

An Odd Linux Install Problem

9 October 2015

I have been trying out a couple of Linux distros recently, in anticipation of building a dual-boot disk for a new laptop computer I will be using a lot. I’ve been using a Fedora-developed tool to put the bootable ISO files on a 16GB USB drive.

But three of the installs (so far) have made it past the BIOS phase, then stopped cold at the point of loading the Linux OS. In each case, the GRUB bootloader can’t find the default menu to load. And in each case, the menu is there.

The workaround is simple, but you have to have been using GRUB for a while to know it. Press the Tab key, and the loadable images will be listed. In most cases, the one you want is the default, which for Fedora is “linux”. The others I’ve seen are some variation, for example, “korora-2.xx-linux”. Type the name, and you are off to the races, er, installation.

But I think that GRUB should tell the user that, or even better, look around a bit harder for the menu, which would let it present the images that are available.

FWIW, if you search for the error message using Google, you will find a lot of advice on how to use editors and other tools to fix the problem. You’ll also find some outlandish “solutions”. Just hit Tab.

The Case Of The Non-Missing Files

7 September 2015

My phone was stolen by some scumbag a couple weeks ago.  She walked into St. John’s, and in the course of wandering around for 20+ minutes, she went into the computer lab and took my phone, then rummaged in my backpack and also stole my Nexus tablet and a mifi device.

So I had to get a new phone, a Samsung S6.  It’s very nice.  I’ve been reconstituting stuff since I depend on my phone so much.  So thanks, “lady”.

One thing I had not really messed with was the music I had on the phone.  I had 450+ tunes on the stolen phone.  Most of the songs were from CD rips I had done over the years.  I also had 107 songs I had captured from the Sirius feed on Dish Network; those I captured with Audacity, and they were stored on my laptop.  So yesterday I connected my phone to my laptop via USB, and copied the files over.  Being curious as to how much memory they took up, I fired up Windows Explorer, browsed to the phone, and did a Properties.  But there were 163 files, not 107.  I started drilling down, and there was stuff I’d had before the theft but had not transferred over just then.  WTH?

I had Apple iTunes on a previous desktop, and I had downloaded a number of songs.  I wasn’t happy with the results, as a significant number of the downloads were covers, even though the iTunes listing claimed they were by the actual group.  So I didn’t keep that up.  I thought maybe the extra files were from there.  But I looked, and I had downloaded a total of 19 songs from iTunes, and that wasn’t enough to get to 163.  Also, the iTunes songs were in a protected DRM format, and wouldn’t play anyway.

I did some searching for a couple of the mystery files.  They were not on the hard drive of my laptop.  I fired up our big backup drive, and searched there.  Some of the files were there, maybe 20, but that’s it.

So I went ahead and ripped a bunch of the same CDs, and some others, and I now have 439 files taking up 1.5GB on the phone.

I would imagine that the “extra” files were stored in a cloud somewhere and got restored.  But I don’t know that I have ever backed that music up.  That raises the question of why some music was backed up, and not all of it.  The S6 is pretty darn smart.

07 September 2015, 1115 Update:

Today we were all driving to Tulsa.  I remembered I had ordered a song on Google Play, for something like $1.49.  I had looked for the song on the S4 several times, and was surprised it wasn’t in the music library.  I looked on Google Play today, and after a little bit of rooting around, I found a reference to the song in the “purchased and free” section.  I also noticed that if you want the song on your local device, you have to click a “download to device” link.  After not finding the link, I found a further reference that said you have to be connected to WiFi for that to work.  So that was interesting.

But the really interesting thing was a list of songs that included most (but not all) of the mysteriously non-missing songs I wondered about.  Apparently those were synced to the library on Google Play, and then when my new phone connected, they were synced back down.  Useful but inconsistent, a little unsettling, and kind of cool, all at the same time.

Buying A Car The Easy Way, From Home Mostly

6 September 2015

We had an example of being hyper-connected today. We are very close to buying a new car for Raegan, and she has settled on the model and key features, having visited several dealers to test drive a number of cars and then downselecting. We are USAA members, so she hopped on their website to check out their car buying service. In less than five minutes, she had a loan approval (for about twice as much as needed; my comment: “Here’s enough rope to hang yourselves”). USAA sends a certificate via email that you hand to the dealer, and you drive off the lot. We did the same for Ian’s car; it was pretty simple.

She had not even logged off the site, and in less than 15 minutes, we had a combination of *eight* texts, emails, and phone calls from dealers ranging from Ardmore to the metro area to Tulsa.

All this without her ever talking to a human at USAA. Now, USAA had some good cars for her to evaluate, but she found the best match at a dealer in Tulsa via that dealer’s website, and she talked to them about the car, so we are headed there Monday.

It was slightly annoying to get the flood of callers and texters (most of whom repeated their contact attempts, some several times), but the response times were amazing, and I will tell you that this way of buying a car beats the heck out of the way we used to do it.

    Getting Upgraded to Windows 10

    4 September 2015

    I, like a lot of others, got an invitation to upgrade my W7Pro to W10 a couple months ago.  A week or so ago, I decided to let it start.

    My first surprise was the Windows system check process.  It said it would take about 10 seconds, but it was still running 30 minutes later, and then an hour later.  I killed the process and restarted it, and this time it ran in a couple seconds and informed me that it had already downloaded the upgrade file (which I confirmed: an almost 3GB file hidden in the system32 folder).

    I let it start upgrading, and went off to do other things.  After I answered a couple of questions, the computer went off to think for a while and flash the hard drive light.  It took about 30 minutes, and after a reboot, the machine… came back up with a new login screen.

    I was able to log in, and the wifi connection worked.  Sound did not.  All of the apps seemed to work, except two I use for capturing audio and video, which was likely related to the lack of sound.  All of the big apps (Office) worked.

    I did some customization and liked what I saw.  I use a second monitor for most work, and I liked that the task bar was at the bottom of both monitors.

    I went without sound until about an hour ago.  Sound issues with W10 seem to be quite common.  One “fix” promoted by MS was to remove the Speakers object, but it didn’t work.  I ended up removing the sound subsystem using Control Panel, and when I restarted the machine, sound came right up.  I will see if my TV/audio capture device works when I get home.

    There were remarkably few oddities.  The Edge browser retained all of my IE bookmarks, but it did not import the certificates I use for accessing webmail.  After a couple uses where I could not open signed or encrypted emails, I noted an option to open the page with IE, and it came up and worked fine.  I had assumed that IE would be zorched as MS was replacing it with Edge, but not so much.  So I’m good there.

    So far, I’ve only seen one app crash.  That was Explorer, and it crashed when I connected my Galaxy S6 to transfer photos.  That was more than a week ago, no problems since.

    I loaded a DOS app (well, “app” is a strong word 🙂 ), and it ran fine.  Windows automagically downloaded a helper (probably a VM) to run the DOS app in, and it has been solid.  The program crashed when I asked it to run a system configuration report, but that’s not unexpected.  I would imagine the app, written in 1997 or 1998, probably does not interface with Windows HAL…

    So far, so good.  My laptop has dual 2.5GHz processors with 4GB of RAM, and it seems to run just fine in that space.

    Later On Friday Update:

    I got the sound working on the machine.  I went into Device Manager and uninstalled the sound card, restarted the machine, and now good to go.

    I also got my TV/audio capture device working.  It’s a Pinnacle PCTV 100e.  I usually use it to capture audio from my Dish Network for later replay, and occasionally recording TV programs for time shifting.  Under Win7, I couldn’t view it at all unless I fired up the Pinnacle TV Center app, let it display whatever the TV was showing, then I could close the TV Center app and fire up Audacity for audio capture.  The TV Center software crashed several times, but an updated VLC player would open the PCTV device and show video and audio just fine under W10.  Audacity works as well.

    So I think I’m in good shape Win 10 wise.  We got Raegan’s Dell laptop upgraded as well.  I had issues with it doing the upgrade automagically from Microsoft, but I manually downloaded the update last night, and ran it this evening, with no apparent issues.

    Fallout From The Ashley Madison Breach

    20 August 2015

    This hack compromised a bunch of people.  It’s a little different from previous compromises/breaches.  The impact to people for most of those previous efforts was largely in the PITA category; sometimes people needed to lock their credit, or work with banks to recover stolen money.

    This is a bit different in that the results of the breach could affect marriages and relationships in a very direct manner that undermines what trust those relationships have.

    Supposedly the reason for the breach was disapproval of the site by the breachers.  That would be, I think, the first large-scale breach done in the name of morality, as opposed to ideology or financial gain.

    It has not taken long for sites to be set up to search the purloined data.  The media, ever a sucker for a quick sex-related story, has breathlessly reported a surge in calls to attorneys.  Who knows if that is true, but I suspect the potential is there.

    I heard a panelist on a program state that if people wanted to visit sites like Ashley Madison, they should buy a throwaway phone, get a throwaway email, and the like.  I don’t know how many sites like A-M are out there, but there are thousands of porn sites at least.  I would suspect they are targets of attacks like this as well.

    Adventures In Photo Printing

    29 April 2015

    My Wood Badge Patrol wanted to give our Troop Guide something to commemorate her great guidance during our course.  We decided to give her a signed print of the six of us.  So, I needed to have a good print of a picture taken as a selfie using a Galaxy S4.

    I learned a couple of new GIMP skills here, figuring out how to turn the background of an image transparent.  I used three such images to decorate the photo.

    I’m in the Boston, MA area.  I looked up FedEx Kinkos, since I know they do prints, the nearest is 10+ miles away.  At dinner, I remembered that Walgreens and CVS also do prints, and happily enough there is one of each about two blocks from my hotel.

    I started off at the Walgreens.  It had two kiosks that had a variety of slots for various memory cards.  I had brought a USB cable that would let me plug my S4 in.  One of the two kiosks wouldn’t read the phone even after multiple tries.  I tried the other one, and that kiosk went off and read every photo on the phone.  Note to kiosk developers:  add some logic to let the user select, say, photos from a certain day.

    Regardless, I selected the appropriate picture.  Now this photo was a JPEG that was 1920×1080, which is a 16:9 aspect ratio.  I wanted it printed at 5×7, but 5×7 is a 7:5 (1.4:1) ratio, so the kiosk auto-cropped the picture, cutting out two of the guys (keeping the full height of a 16:9 image at 7:5 means losing about a fifth of the width).  I tried a number of sizes, but auto-crop always kicked in.  I couldn’t turn the auto-crop off.  I even tried printing it on an 8×10 piece of paper, but again, it cropped.  So I left Walgreens and headed across the street to CVS.

    CVS had two kiosks of a different brand (Kodak).  I never could get either kiosk to read from the USB cable connection.  These kiosks also had the ability to transfer files via WiFi, *if* you installed a smartphone app.  The left-side kiosk wouldn’t connect via WiFi, but the right-side device connected right up.

    This was pretty cool.  The device changes the WiFi SSID for each transfer, and encodes the SSID and a password in a QR code.  You select the picture(s) on the phone, then use the QR reader in the app to grab the WiFi settings, and then the file is transferred in less than a second.
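
    As an aside, that WiFi-settings-in-a-QR-code trick uses a simple “WIFI:” string format, and you can generate one yourself.  A minimal sketch with the common qrencode tool (the SSID and password here are made up):

        qrencode -o join.png 'WIFI:T:WPA;S:KioskTransfer;P:s3cret;;'

    Point any QR-capable phone app at the resulting image and it can offer to join that network.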

    The options for printing were far greater on this kiosk.  You could size the photo to 4×6 or 5×7, but they also had an option for 6×9 that worked well.  I selected it, they tried to sell me some extra stuff I didn’t need, and then it printed the picture on an attached printer automagically.  It looks pretty good, and cost me $2.

    It did take some futzing to get a working kiosk.  I was disappointed in the lack of options for the Walgreens unit.  The CVS units were pretty cool.  I’m a little concerned about the reliability; only one of the four units fully worked.

    Google Maps Coolness

    11 March 2015

    Here’s one very nice integration that Google Maps (and Google in general) provides.  I was looking for a restaurant using Google Maps on my desktop computer.

    When I pulled out my phone and fired up Google Maps, that restaurant was the first thing in the list. Click on directions, and I’m on my way. Very fast and useful. Latency was less than 10 seconds.

    FCC Decision on Net Neutrality The Right Decision

    27 February 2015

    The FCC requested comments from the public on the concept of network neutrality. I was interested enough in this that I submitted two sets of comments (I was one of reportedly several million commenters). I am in favor of network neutrality.

    Since the FCC decision yesterday that supports the concept of network neutrality (NN), there have been two basic classes of reaction. Pro-NN people were saying it was a victory for ordinary people and most businesses, and anti-NN people were thundering that it was government control of the Internet, would cost business millions, and would stifle innovation.

    You can separate “the Internet” into a couple segments. One segment is the backbones of the net, which consist essentially of a set of very high capacity network connections that run between major hubs, and typically radiate out from major hubs to smaller hubs with a set of high capacity network connections, and from the small hubs to even smaller hubs, eventually terminating at houses and businesses. I say “backbones” because each of the Internet service providers (ISPs) has its own backbone. There are interconnect points between the backbones so that each house or business doesn’t have to contract with every ISP to be able to reach every other house or business.

    ISPs sell access to houses and businesses, and they have every right to charge different amounts depending on how much data you want to pay for. A customer who wants to fire up their computer each night, read some news, and check email, clearly uses less bandwidth than Google, and so pays less. That is not the issue with NN.

    Say Google contracts for an OC-3 connection via AT&T. They pay money to AT&T for that bandwidth. But while some of that traffic goes to and from AT&T to other AT&T customers, some of it also goes to Cox Cable customers, and it is a lot of traffic. Under NN, Cox has to carry that traffic regardless, and without impeding it.

    But what the ISPs wanted was to eliminate the concept of NN. In this example, Cox wanted to charge Google for that traffic that originated on the AT&T network, or be able to throttle Google traffic down to a smaller amount of bandwidth. The claim is that it is for cost recovery. But in reality, Cox has to keep its backbone large enough to satisfy all of its customers, and they surely have their own high-traffic customer (say, Bing), and some of that Bing traffic goes over to AT&T, who wanted to charge Bing a premium. It’s really a scheme to charge twice for some traffic while paying once for the infrastructure.

    This doesn’t cost ISPs any more. And it sure does not stifle innovation. Think on this: Google came up with a nifty search scheme, and millions use it. To keep those users happy, Google pays AT&T for more and more bandwidth, and so pays for that extra traffic. Any other company that comes up with a good idea can do the same, and the ISPs will be paid to give the extra access.

    And the argument of “government regulation” of the Internet is just bogus. The FCC issuing rules that guarantee NN has NOTHING to do with government regulation of the Internet. As a side note, it’s ridiculous for any Member of Congress to complain that the FCC NN ruling is regulation of the Internet, and at the same time support the NSA or the police capturing and storing Internet traffic from people who are not suspects in any crime (warrantless wiretaps, data vacuuming).

    So the FCC is actually putting a stop to ISPs being able to double-bill some big bandwidth users. It’s a good decision.

    Urbanspoon Searches Get Better, Yea!

    19 October 2014

    I’ve been using Urbanspoon forever to find beta on restaurants, and I’ve been doing it enough that I’m a Prime member.

    The biggest problem with Urbanspoon has been the search function. I want to be able to find a restaurant in a town or neighborhood, by cuisine.

    I noticed a couple days ago the search bar at the top of each page was different. It works pretty well! I put the name of a restaurant in the appropriate text field, and got suggestions by name. I put the city in the other field, clicked search, and boom, there was what I was looking for.

    Much better! I would like to suggest a map-based search next. Show me a map of a specified area (e.g. 39th and Tulsa, Oklahoma City), and then show me restaurants in the surrounding couple miles.

    It’s getting there!

    One Reason Computers Are So Inexpensive…

    17 October 2014

    One of our St. John’s faculty computers was having a hard time accessing Web pages.  It was intermittent.  I fired up a command prompt and pinged the server, and the result was a loss of one of the four packets.  I set up 1,000 pings, and had 31% packet loss.  Just to be sure, I moved the network cable to an open port on the switch and repeated the kiloping; this time it was 43% packet loss.  My final test was to connect my laptop to the cable: no packet loss.  It was pretty clear the NIC in the machine was going bad.  This machine is a less-than-a-year-old HP.
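
    For reference, the kiloping is nothing fancier than ping with a count argument (substitute whatever host you are testing against); the summary line at the end reports the percent loss:

        ping -n 1000 server      (Windows command prompt)
        ping -c 1000 server      (the Linux equivalent)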

    I decided to open the machine up and see if it had a PCI or PCI Express slot that I could put a replacement NIC in. I had a minor surprise: the computer had no slots at all for cards.  The case has knockouts, but nothing on the (very small) mobo.

    So this machine has numerous USB ports open, and I have my choice of USB Wifi or USB-to-RJ45 connections,  so I will be able to fix the problem. 

    It led me to wonder about another machine I have here, an inexpensive Dell.  I pulled the cover off, and sure enough, no card slots.  I’m sure it’s a trend.

    For a NIC, it’s not an issue, but the second-highest failure item I have is video cards.  Those are not really available or reliable in USB, so I may not have a good replacement option there.

    Why Can’t Windows Just Get Along?

    6 October 2014

    Subtitled:  Windows 8 Cost Me Five Hours of Time For A Simple Task

    I spent a largish part of this weekend at St. John’s taking care of a lot of stuff that has built up a backlog.  Most of it was straightforward:  I got all of the lab computers up to snuff (except one that has a video card slowly failing, and another with a balky network card, whose replacement I managed to leave at home so I couldn’t install it), and ran a stress test on the lab network.  Ian got the computers in Raegan’s room connecting and working, and I re-crimped a new network connection for them.

    I also worked to get the five (previously four) computers in the 2nd/3rd grade room back on the school network.  This was a significant untangling job, but straightforward.  Those four computers have shared a laser printer (an HP 2100TN) using XP printer sharing for a long time.  Due to where the computers are now physically located, I changed the printer server computer from one to another, printed a test page, and then went to the other three XP machines to connect them to the new printer server, and delete the old one.  I also replaced the five-port 100Mbps switch (four computers and the building network connection) with an eight-port, since I had two additional computers (the W8 box and one other).

    I’ve said in the past that shared printing is one of the things Microsoft got 100% right, and pretty darn easy, since the days of Windows 95.  All of the computers are in the same subnet, and in the same workgroup, and all can ping each other, so no problem.
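
    (The quick sanity checks I mean here are doable from any command prompt; the computer name is made up:

        ping frontdesk-pc            basic reachability
        net view                     list the machines visible in the workgroup
        net view \\frontdesk-pc      list that machine’s shares, including printers

    If all three work, sharing should be a non-event.)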

    Next I went over to the new computer, a Windows 8 box.  So it’s a new UI.  Whoever came up with it, and deletion of the Start button and menus, ought to be tossed out of the profession of software development.

    Here’s an example.  I get the move-the-mouse-to-the-upper-right-and-swipe-down to get a menu (well, I know to do it, I do not see the utility).  Say I want to change printer settings, as in adding a printer.  On XP, click Start, Printers, and you get a dialog that includes a button Add A New Printer.  Windows 8?  Do the odd swipe thing, then click Settings (this makes sense).  You get Devices, including Print.  Sounds reasonable.  Click that Print, and you get this:

    WTF?  Just what in the hell does this mean?  You can’t right-click or click anything except the left arrow back button.  FOUR USELESS CLICKS.

    I finally found the add a printer dialog by clicking the faux-Start button on the lower left, and then typing on the odd tile screen P-R-I-N-T, and eventually Windows 8 showed a link to Printers.  I clicked it and got a fairly standard Printers and Devices that included an Add Printers dialog.

    OK, now we are getting somewhere.  I got to the shared printer box, but Windows could not see the XP box.  I found a place to set the Workgroup on the System menu, and rebooted the machine.  It refused to see the computer that was sharing the printer.  Couldn’t see any of the other four computers in the workgroup either.  Wouldn’t take a direct entry in the form \\server\printername either.  I mucked around with all this for more than an hour.  Did some reading about W8 and sharing, and found that Microsoft was really interested in having people set up Homegroups.  Well, Microsoft, bite me.  I ended up giving up for the evening.

    The next day, I brought everything up from scratch and tried again.  Still no luck.  I was pretty frustrated at this point.  I could sort of understand having issues connecting XP boxes to a W8 printer (understand, but not agree).  But the allegedly more advanced W8 should talk to XP flawlessly.

    So after a lot of reading, I came to the conclusion that the W8 connectivity problem was not solvable.  The printer had a JetDirect card, so I decided to connect everything up that way.

    I connected an RJ-45 cable from the printer to the switch, and had the printer dump a status page.  The printer had a static IP address set up in the 10. range.  I changed one of the XP boxes to a static IP in the 10. range.  I could ping the JetDirect.  I tried to hit the JetDirect via a web browser; no luck.  I did a telnet to it, and got asked for a password.  A null password didn’t work, nor did admin, or a couple others.

    Off I went to the web to find out how to reset the password on a JetDirect card in a LaserJet 2100TN.  I had to look through dozens of pages that each described some variation of powering the printer on or off while pressing the GO and CANCEL buttons.  To save anyone else from having to do this, here is what to do:

    1.  Do not trust anything related to the 2100TN on the HP website.

    2.  To clear the password on the JetDirect card in a 2100TN, power the printer OFF.

    3.  Hold down the Cancel Jobs button.  It’s the smaller one.  Power the printer ON.

    4.  Wait 30 seconds.  Release the Cancel Jobs button.  Wait about 3 minutes.  The JetDirect should be cleared, and the default is to get an address via DHCP.  You can verify this by holding down the GO button and then pressing the Cancel Jobs button to get a printer status page printed.

    Now, if you have a DHCP server, the JetDirect should have received an address in the same range as your computer.  If not, you will need to change your computer IP address to be in the same subnet as the JetDirect.

    The web server still didn’t work.  I did a telnet to the JetDirect.  It does NOT support the ECHO ON command, so you will be typing in the dark.  I used the DHCP-CONFIG: 0 command to disable DHCP, and IP: followed by an address to set a static IP address so my DHCP computers could find the printer.  Now I was able to ping the printer from all the workstations.
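
    Condensed, the blind telnet session looks roughly like this (the addresses are examples; DHCP-CONFIG: and IP: are the commands that mattered):

        telnet 10.0.0.17         connect to the JetDirect’s current address
        DHCP-CONFIG: 0           disable DHCP (no echo, so type carefully)
        IP: 10.0.0.20            set the static address
        quit                     end the session (from memory, quit saves the settings)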

    Next I set all four XP machines to print to the newly IP-enabled printer, and deleted the shared printers.  That was easy.

    I went to the W8 machine.  It could ping the printer, so that was a good sign.  I started the Add Printer dialog, and almost immediately got this:

    [screenshot: 20141005_151551]

    Yay, looking good so far.  So, I thought, this will be easy.  I clicked next, and got a dialog that didn’t have that particular printer, but it offered to go off to Windows Update and find it.  I clicked OK, and W8 went off and thought a bit.  It came back with this:

    [screenshot: 20141005_151610]

    So, W8 went off and downloaded drivers for approximately 10,000 printers.  Well, maybe not that many.  But, I would have expected W8 to at least download the driver for the printer that it had already identified.  Or at least pointed at the printer in the selection lists.

    Regardless, I pointed at HP and then found the printer in the list of printers, and the driver installed, and the printer could finally be printed to.

    But this was way too hard.  I’m good at this stuff, but a task that has been pretty easy since W95, and nearly trivial since W98, turned into a heck of a lot of wasted time for me due to W8 being way too difficult in talking to ANOTHER Microsoft operating system.

    I’ve not blogged about it, but two weeks ago I had a non-trivial time getting a W7 box to reach out to another W7 box.  This leads me to believe that MS wants to junk workgroups in favor of homegroups.  If that’s the case, it’s quite user-antagonistic.

    Notes on Using A TRENDNet IP Camera

    22 August 2014

    I bought a TRENDNet TV-IP551W camera back in January from Newegg.  I was looking for something else, and the camera came up as a special for (IIRC) $14, with free shipping.  I have wanted to play with one for a while, so I bought it.  When it got here, configuring it was trivial.  I had a picture coming out of it in about three minutes flat.  I attached the camera to the outside of the house near an outlet that is tied to the outside lights; I use it to power holiday lights.  The camera only came on after dark, but it wasn’t being used operationally, so I didn’t sweat it.  Every once in a while I would connect to it remotely and see what was going on with the driveway.

    The only issue here has to do with motion.  Any device (including my Android tablet and phone) could see a still picture grab, and refresh it manually.  BUT, you need either ActiveX support or Java to see motion video.  ActiveX means Windows.  I think I tried to get Java on my tablet, but gave up after a try because it didn’t matter at the time.

    The camera has been hanging outside since then, and has worked.

    Last weekend I decided to put it to operational use.  I moved it to a better location, and set it up to perform motion capture.

    The first thing was the motion capture.  I tried using my Windows 7 laptop to define exactly where on the screen the motion capture areas were.  I pointed my W7 IE browser at the camera, selected Administration, then Configuration, and finally Motion Detection.  The camera wanted to download an ActiveX control.  No problem, but every time I tried, Windows would block installation of the control.  I set security essentially to off, and it still wouldn’t install.  For the heck of it, I used Raegan’s computer (which is XP), and it worked fine with the camera.  Hmph.  I left the motion sensitivity at the default of 90 (scale 1 to 100).

    With motion capture set up, I went after email.  The camera will send you an email when it detects motion.  The camera wants to know an SMTP server, so I pointed it at our upstream Cox SMTP server.  A test message went out just fine.  So far so good.

    When the camera detects motion, it will capture the motion and upload the imagery.  Sounds cool.  It wants to upload the data to an FTP server.  Most people don’t have one of those, but I have several!  So I fired up Filezilla on Raegan’s computer, created a user name and password for the camera, and a folder to store the video.  Then I went back to the camera, plugged the information in, and sent Ian out to trigger the camera.  I almost immediately saw activity on the ftp server, but no files uploaded.  Hmmm….

    Much experimentation ensued.  I should have installed Ethereal (Wireshark) on her computer, but instead I played with settings fruitlessly for a while.  Finally, I did the user creation on the St. John’s server, then pointed the camera there.  Then I did a remote desktop to St. J, fired up Wireshark, and watched the packets flow in.

    I had set the camera up to dump stuff to /home/drivewaycam.  On Wireshark, I could see the username and password (FTP sends in the clear), and then a CWD drivewaycam.  OK, now I knew what was going on.  I stopped the camera, then created a subdirectory called drivewaycam (making the tree /home/drivewaycam/drivewaycam), did a chown on the directory (chown drivewaycam:drivewaycam drivewaycam; type that fast!), and restarted the camera.
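
    In other words, the camera logs in and immediately does a CWD into a directory named after the last element of the upload path, so the whole server-side fix boiled down to (paths as above):

        mkdir /home/drivewaycam/drivewaycam
        chown drivewaycam:drivewaycam /home/drivewaycam/drivewaycam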

    Wireshark now showed file transfer, and a check of the new directory showed jpgs.  So it was working.  I went off and did some other stuff for a while.

    Right before I went to bed, I got my phone, and… there were well over 1,000 NEW messages from the camera!  To make the story short, it turns out the camera doesn’t send an email every time it decides to do a motion capture.  It also doesn’t upload a video.  It uploads 1-second images to the server, and sends an email message every time it does it!

    I quickly went in and turned off the email feature.  I also turned the sensitivity down to 50%, which should reduce false triggers.

    But… the camera has captured the mail and package delivery people, and Erin coming home from school.  So it is working.

    I would like to get an email when motion is detected, and have sent off a feature request to TRENDNet.  I use the standard Unix/Linux ImageMagick convert tool to batch-convert each set of jpgs to an mpeg video; I will likely set that to be done in a cron job at some point.
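
    The conversion itself is a one-liner; a minimal sketch, assuming ImageMagick with an MPEG delegate installed (the frame delay and the script name in the cron entry are made up for illustration):

        convert -delay 100 /home/drivewaycam/drivewaycam/*.jpg driveway.mpeg

        # hypothetical crontab entry to run a wrapper script nightly at 2330
        30 23 * * * /usr/local/bin/make-driveway-mpeg.sh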

    I wonder if the camera could output IP video in H.264 or as an mpeg stream.  That could be read directly from Windows Media, VLC, or most any other open source tool.

    Regardless, the IP camera is pretty darn cool.  I am going to get another one at some point, and I will likely make it a “see in the dark” camera.

    Update:

    As I have reviewed the captured images, I was seeing something amusing:  people or cars on my driveway would appear as if by magic!  The cars would be about halfway down the drive, and people about a quarter of the way down.  This was with the camera set to 50% sensitivity.  I changed that to 75%, and now I see things moving a lot farther away.  The obvious downside:  about twice as many images.  Glad I turned off the email notification.

    I’ve had this camera uploading to the server I run for St. John’s for over a month.  I downloaded some image tools to check out, and settled on using the open source mplayer package, which has Linux and Windows versions.  So the process was to fire up my SSH and Secure FTP clients, point them at the St. John’s server, and do an mget -r to pull the individual image files to my laptop (this means pulling in roughly 500MB of stuff every day, so I’m glad I’ve got a bitchin’ good network connection at home).  The camera stores the files by date and hour.  So I would have a directory named 20140908 for 08 September, and then subdirectories like 0400 for the 4:00AM captures.
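
    The daily pull could also be scripted; a hedged sketch using the lftp command-line client (the host name is a placeholder, and I actually used GUI clients):

        lftp -u drivewaycam -e 'mirror 20140908 20140908; quit' sftp://stjohns.example.org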

    I wrote a batch file that works from the DOS prompt.  I manually CD into the day directory (20140908), then run the batch file.  It checks to see if there is a directory for a particular hour, and if there is, it dives down there and runs the mplayer mencoder tool against any JPEG files in that directory.  The mencoder combines those JPEGs to make an MPEG movie, which is a lot easier to review.  I have VLC on all my machines, and it plays the MPEG just fine.  Note, the Microsoft Media Player SHOULD also play these, but it can’t, and whines about it.  Curiously, the Microsoft Media Encoder plays the MPEGs just fine.  Mplayer, unsurprisingly, does fine as well.
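
    The logic of the batch file, sketched here as a shell loop because that is easier to show (my actual file is DOS batch, and the mencoder flags are an assumption, not my exact command):

        cd 20140908                          # the day directory
        for hour in */; do                   # hour subdirectories like 0400/
            ls "$hour"*.jpg >/dev/null 2>&1 || continue
            mencoder "mf://$hour*.jpg" -mf fps=2:type=jpg -nosound \
                -ovc lavc -lavcopts vcodec=mpeg1video -o "${hour%/}.mpg"
        done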

    An hour of captures typically produces a movie that runs between 15 and 45 seconds.  On a windy day, the number of files is higher, as the trees in my yard move around a lot, which triggers the motion detection.  I can review the entire day in about five minutes, which isn’t too bad.  It’s kind of cool to occasionally see what are effectively time lapses, as shadows of the trees above the driveway move with the sun.

    The last thing I’ve done is to move the image capture to a local computer.  The images were being uploaded from the camera to St. John’s, then I was downloading them to my laptop.  That’s a long ways to go, and a lot of bandwidth.  I have an extra computer at the house that we don’t use; it has a wifi connection and it’s a Windows 7 box.  I cleaned all the stuff off that wasn’t needed anymore, and installed the free FileZilla ftp server on it.  I created the same user and password that the camera uses for the St. John’s server, and the same c:\drivewaycam\drivewaycam directories.  I got the computer on the house network, and set the router to always provide the same IP address to the computer.  So far, so good.  Next, I changed the ftp upload address from St. John’s to the computer in the house, sent Ian out to walk in front of the camera, and watched the images start rolling in.

    I uploaded mplayer to the computer, and the encoder.bat file, and ran them manually; they worked fine.  The last thing to do was put RealVNC on the computer.  Then I shut it down, moved the computer to an out-of-the-way location where it still had good wifi access, and powered it up again.  Now, it sits there and does nothing except capture images.  When I get home in the evening, at some point, I use my laptop to remote access the computer, run the encoder.bat against the directory of the day, and then scan the images from my laptop.  It works really well.

    I will likely upgrade the batch file  at some point and combine it with the Windows version of cron to have it scan for new files a couple times a day, and then automagically convert them to MPEGs, and maybe even email the MPEG to me after creation (each hourly MPEG is typically only a couple hundred K, not bad for emailing).

    So this is working fine so far.  I am going to get a night vision version of the camera next.

    I will say that while all of this ftp stuff was easy, it was easy for me, who has been doing this sort of IT work for literally decades.  I think it is too difficult for the average user.  The camera is a cool piece of technology, and seems to be essentially a Linux device with a camera input.  There is NO reason the device should not be able to capture video directly.  Even if TrendNet wants to continue doing the JPEG capture, there is no reason not to do something like I do and run mencoder to convert the capture to an MPEG, and then upload that file (it would sure be faster to upload a single 200KB file instead of several hundred 200KB files), or even just email it.  I doubt that anyone who buys these cameras is sitting there watching it all the time, and the motion capture function is the best feature for determining when something happens, but generally after the fact.  I hope that the TrendNet people read the email I sent to them and think about how their users need to use the camera.

    An Edubuntu Installation for St. John’s

    6 August 2014

    This was really way too easy…

    The school computer lab (and also the rest of the student computers in the classrooms, and to a lesser extent our faculty machines) is being impacted by the end of life for Windows XP. Most of the machines are XP Pro, some XP Home. There are a couple of W7 machines as well. The machines are all pretty old (most are 2004 vintage), and are increasingly having issues of one kind or the other. I installed W7 on one of them, and it craaaaaaawwwwled, even after I dumped an extra couple GB of memory in it. I’ve been spending an increasing amount of time keeping the things updated, and even with the remote access tools I have been deploying the past couple years, I’d still have to go around to 30 or 40 machines for some things.

    The machines also got “lab rash” from kids playing with whatever settings they could get to, occasionally jacking a machine up by inverting the display or whatever, so I would end up going by to fix it. And that’s even with Raegan being very swift at fixing stuff.

    So I started looking at alternatives, and decided that Edubuntu was the best candidate. It had all of the existing software that we currently use, and a lot more. It has thin-client capability, so that would end kids jacking with the machines.

    Edubuntu needs a fairly beefy server. I had a donated machine from an oil company that sported a 3.8 GHz Xeon and no less than four 146GB SCSIs moving along at 320MB/s (with space in there for a second processor if I can find one cheap). It has dual power supplies and enough fans to build a drone, and *two* GB Ethernets. It is, BTW, also fairly old, having been introduced in 2004. That explains why it has two USBs, *and* PS/2 connectors. The machine came with the six memory slots filled with 1GB DDR2 ECC memory sticks. I happily filled the six slots with 2GB DDR-2 memory, and the poor machine squawked at me until I turned it off. Turns out it ONLY wants ECC memory, and the memory I had was non-ECC. Oh well.

    For 20 machines, Edubuntu recommended 20GB of disk (not a problem there) and 4GB of memory for every 20 clients. So 6GB is comfortable (especially since I am planning on running the browsers locally). I will probably haunt eBay and get at least a couple more sticks of ECC memory also.

    I drew up a couple iterations of how I would deploy the thing on the school network, decided it would work, and started the process. I have a couple weeks until school starts. I knew the good news was, since I was deploying thin client network-boot clients, that I wouldn’t have to change the lab workstations at all, except to enable net booting, and so I could fall back to the XP workstations at any time.

    So I downloaded the latest Edubuntu, popped the DVD into the machine, and started the installation process. All was smooth until I got to the part where you identify the disk to install Edubuntu on.

    Now, when the server was donated, I had wiped it for the company that donated it, and then dropped Fedora 16 on to it to play with. That all worked fine.

    So when Edubuntu got to the Installation Type page, it asked if I wanted to use /dev/sda, noted that there was Fedora on it, and warned that if I used the whole disk then the Fedora would be wiped. That didn’t bother me, so I selected it, and the Use LVM option, and told it to Continue. I got the Erase Disk and Install Edubuntu page, verified that /dev/sda would be used, and clicked Install Now. The Install button greyed out (only one shade of grey), the page title changed to Installation Type after about 10 seconds, and the Install Now button came active again. Hmmm… Clicking Install Now again takes you back to the actual Installation Type page (with use entire disk and use LVM). This cycle repeated (I tried it eight times).

    So off I went to research. The existing Fedora would still boot. I installed Edubuntu on another computer to show the media was OK. I posted a query to the Ubuntu Forums. I kept coming back to the existing Fedora installation. I’ve done a lot of installs of a lot of OSs, and most of them would happily overwrite an existing OS, so I was skeptical that was the problem. In fact, Edubuntu happily overwrote a Linux installation that was on the workstation I used to show the media was OK. But the existing Fedora on the server was LVM, which is a technology not fully supported by some Linux tools (like gparted). I hadn’t gotten any suggestions from the Ubuntu Forums (which surprised me).

    So I decided to zorch the Fedora LVM installation. It wasn’t entirely straightforward; LVM is not as well documented as it could be, and there is a wealth of similar-looking beta out there, some of it slightly contradictory. Here is what I ended up doing (a condensed command sketch follows the list):

    • I booted the server using a Fedora Live CD (it was the Security Spin for Fedora 15).
    • Used lvmdiskscan to identify the LVM pieces. It had four PVs, and three of what I would have called mount points (logical volumes): root, home, and swap.
    • Used lvremove for the root and home partitions. When I tried to remove swap, it complained that swap was active (!). This is probably what confused the Edubuntu installer. I used swapoff -v to deactivate the swap volume, then was able to use lvremove to remove it.
    • Used vgremove to take out the volume group.
    • Used pvremove for the four PVs.
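
    Condensed into commands, the teardown looks roughly like this (the VG/LV names are Fedora-style assumptions; lvs and vgs will show the real ones):

        lvmdiskscan                              # find the PVs and LVs
        lvremove /dev/VolGroup/lv_root
        lvremove /dev/VolGroup/lv_home
        swapoff -v /dev/VolGroup/lv_swap         # deactivate swap first…
        lvremove /dev/VolGroup/lv_swap           # …then it can be removed
        vgremove VolGroup
        pvremove /dev/sda2 /dev/sdb1 /dev/sdc1 /dev/sdd1   # the four PVs (names assumed)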

    I rebooted the machine and replaced the Fedora CD with the Edubuntu DVD. Installed without a hitch.

    The key to the LVM removal was to take out the mount points first.

    So clearly there is a buglet in the Edubuntu installer that does not like existing LVMs, or perhaps does not like swap partitions in particular.

    Once the server rebooted, I connected a switch to the LTSP port, and then a Dell workstation to the switch, started it, and switched it to network boot in the BIOS. It still booted from the disk. I restarted it, went back into BIOS to disable the disk, and on reboot it came up over the network, and I had my thin client running.

    One more glitch: the Fedora I installed to play with automagically added all four disks (PVs) to the LV with no prompting from me, so instead of installing to a 146GB disk, I had a 550GB+ logical disk. The Edubuntu install put LVM on, but only with the single disk. I will manually add those using pvcreate and then lvextend this evening or tomorrow, but it is one more thing to do.
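
    The add-a-disk dance is short; a sketch with assumed device and volume names (the 06 August 2200 update below links the instructions I ended up using):

        pvcreate /dev/sdb                            # initialize the new disk as a PV
        vgextend edubuntu-vg /dev/sdb                # add it to the volume group
        lvextend -l +100%FREE /dev/edubuntu-vg/root  # grow the root LV into the new space
        resize2fs /dev/edubuntu-vg/root              # grow the filesystem to match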

    I’ve a long list of stuff to do. I need to add a student user, add some software, enable local access to USBs, add access to our St. John’s shared disk, and get the remote management tools working. I will get a graduate level course in rebuilding the client image as this goes along.

    I’m also interested in hauling the server to school and plugging it into the lab network, and watching all those machines boot up simultaneously.

    But the really amazing thing is how slick it was with the Edubuntu DVD. There is a heck of a lot of capability there.

    06 August 2014, 2200 Update:

    I added two of the three other disks to the LVM installation, using a set of excellent instructions at http://www.rootusers.com/how-to-increase-the-size-of-a-linux-lvm-by-adding-a-new-disk/. So now I have a 410GB space to play in. I didn’t put the fourth disk in as the SMART disk function was reporting a future failure.

    17 August 2014, 1239 Update:

    I’m building a second server for the school to host a student management system, a local storage cloud, and WordPress for the teachers.  It’s also Ubuntu based, also 14.04, and had a disk in it with a Fedora 10 LVM instantiation.  This installation went right over the Fedora 10 with no comment. 

    BTW, Unity on this one is sloooooooow.  I installed XFCE, which I am used to from a number of Live CDs I use, and it runs darn fast.

    An Interesting Galaxy S4 Feature

    24 July 2014

    Hmmm, something kind of cool and frightening at the same time. I just dialed into a telecon that was in the calendar of my Galaxy S4. The calendar detected the phone number of the teleconference, and I only had to tap it to dial it. That’s pretty standard; my Blackberry would do that five years ago. But the message had this text:

    Telecom: 888-283-xxxx
    Password: 7958649

    After the dialing was in progress, the S4 popped up a dialog with “Do you want to send 7958649 as tones?”. It startled me enough that I pressed No, then after a couple seconds, went back to the message, and it repeated, at which point I pressed Yes, and it sent the tones and got me into the telecon. The only thing it didn’t do was send the trailing “#” that is the end-of-numbers token. I did that manually.

    That’s pretty darn smart of the software to realize that most meet-me numbers require an access code, find those in the message, and offer to send them. As I do with many of these, I had jumped back to the message to get the access code, and I spoke the code to help me remember it when I jumped back to the phone page. Maybe the spoken numbers were the key. Regardless, I’m going to play with that (when I get some time). It’s a very cool feature; I wonder what else the darn thing does. Raegan already thinks the phones are smarter than us.

    Another Victory for Personal Privacy

    25 June 2014

    I was glad to see the SCOTUS bar searches of cell phones by police. I was amazed that it was a 9-0 vote.

    The police/NSA/FBI surveillance programs are antithetical to our freedom in this country. Our jurisprudence is based on the concept of innocent until proven guilty, and the burden of that proof of guilty is on the state, not the individual.

    There are too many instances of a traffic stop resulting in the wide-ranging search of an individual’s possessions (you see this on the highway constantly), with little accountability for the police doing the searching. For every cited case of a drug dealer being found this way, I would guess that there are many, many more cases where nothing is found. That would be information the police would not want to have publicly known.

    I do understand that the police would need to check to make sure that they are safe during these stops (although in the vast majority of stops, the cops are the ONLY ONES with guns), but rooting around in a person’s wallet or their phone does nothing to advance that safety argument.

    The police need to do their jobs the way they were intended to: if someone is suspicious, start an investigation, get warrants, and find evidence.

    This is related to another story yesterday about a SWAT team raiding a house (IIRC, the major crime being investigated was that a nephew of the house occupants was suspected of having what the story described as a small amount of drugs). The SWAT team came in with automatic weapons, and a flash-bang grenade ended up in a crib, critically injuring an infant. The nephew was not even there. Overwhelming deadly force and a complete lack of intelligence (both in knowing where the nephew was, and in general brains) were a terrible mix here. The injury to the baby was far out of proportion to the supposed crime. The increasing militarization of the police just feeds the worst fears of government, and will increase the reaction of those who already fear some sort of police state.

    Power Supply Frankenputering

    30 May 2014

    We had a power fail at the house this morning, probably a squirrel somewhere he should not have been. It’s an easy fix for OG&E, just push the big breaker bar back into position. We were down for about three hours.

    Everything in the house came up OK, except for the Dell desktop Raegan uses. I pulled it out and opened it up; the power supply was clearly fried. I went looking here in the house for a power supply (I keep a few spares for school computers that fail), and the first thing I noticed was the ATX power connection was… too… short. By four pins.

    Clearly, someone upgraded when I wasn’t looking.

    I went online and found that the ATX 24-pin connector was the new standard. Hmmmm. I knew that none of the school computers supported that. Then I cast an eye on a computer a work buddy of mine had donated a couple weeks ago, that I hadn’t taken to school yet; it looked kinda new. I opened it up; the first thing I noticed was the hard drive was missing. The second thing was that it indeed had a 24-pin ATX power connector. The power supply, though, had an odd 20-pin connector plug with a four-pin outrigger plug; two physically separate connectors.

    I pulled the ATX-24 spec sheet, and then fired that sucker up in bench mode and checked all the leads with my meter; they were right in line with what the spec sheet said they were supposed to be. I got it into her computer, and it powered up to the BIOS screen just fine. So far, so good.
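
    (For anyone wanting to do the same bench check: the general ATX trick, standard practice rather than anything specific to this supply, is to jumper the green PS_ON# wire (pin 16 on a 24-pin connector) to any black COM wire, which starts the supply without a motherboard, and then meter the rails:

        PS_ON# (green, pin 16) -> COM (black)    jumper to start the supply
        +12V (yellow) vs COM                     should read about 12.0 V
        +5V (red) vs COM                         should read about 5.0 V
        +3.3V (orange) vs COM                    should read about 3.3 V)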

    Except… (there’s always one of those). Her drives (a 1TB and a 500GB) are SATA drives. The SATA power cables could connect to one drive, but didn’t have the length to connect to both.

    I went back to the dead power supply and cut the SATA connectors from it, and then grafted them into one of the cable bundles in the new power supply.  A little solder, a little electrical tape, some strain relief, and she was back in business.  The grafted cable went to the two SATA drives, and one of the other cables, which had a PATA power connector, went to the DVD drive.

    It all powered up and is working fine now. It could have been worse.

    Changing Phones From an S3 to an S4

    19 May 2014

    I started carrying a Samsung S3 in January 2013 after a long time with Blackberrys. I was very impressed by the S3.

    A couple weeks ago, I noticed that I was having some issues with the phone charging. Some troubleshooting led to me determining that the phone (1) was having physical connection issues with the micro USB connection, and (2) the charging circuit in the phone was degrading. I found that a repair to the USB connection was a minimum of $65, and was no guarantee that the problem wouldn’t recur.

    Over a couple days, the situation took a bad turn. The phone just couldn’t stay connected to the charger, and worse, the phone would put no more than 70%, then 50%, then 40%, then somewhere between 1-20% charge into the battery, even when turned off and left on the charger overnight.

    So today I went to AT&T and got a new phone. I had looked at the differences between the S4 and S5, and the two sensors and better camera weren’t worth the extra cost. It seems most of the stuff on the phone (pictures and the like) transferred over just fine. Some things, like messages and apps, didn’t, so I made a list of those for downloading later.

    I also got a new Otter belt holster for the phone, and for the sum of $1 got a neat device that shares my data via WiFi, so I can run my tablet or laptop without using the phone as a hot spot (which works well, but sucks battery down). AT&T took my old phone in trade (wiping it first), and the whole transaction was no money down.

    So after all this was done (by a VERY competent and professional young man in the AT&T store on west Maple in Omaha), I headed out to a Target a mile or so east to look for some stuff for Raegan. My phone fell right out of the belt holster as I was walking in, and it got RUN OVER by an SUV. Many, many, many bad words went through my head watching. I got the phone, and there was not a bit of significant damage! There were a couple minor dings on the back side of the case, which was on the ground. There were TIRE TRACK MARKS on the display side. The display was protected by a clear plastic shield, which did its job. Simply amazing.

    One related item. Since my phone was in such a world of hurt, I downloaded an AT&T messaging app onto my tablet, which promised to send and receive text messages. It loaded up my contacts (from Google, I would imagine), and I was able to send text messages back and forth to Raegan (I had already set the tablet up to Skype with her in lieu of using the very fragile phone). The ability to text isn’t the thing, though. When I fired up the app on the tablet, the tablet downloaded not only my contacts list, but HUNDREDS of text messages which I had sent or received, going all the way back to at least Feb 2013.

    This just is more evidence that things like text messages don’t get deleted from that great cloud of the Internet.

    City WiFi in La Vista, NE

    18 May 2014

    I stopped for dinner last evening at a restaurant in La Vista, a western suburb of Omaha. My phone was about to fail, so I took my Nexus tablet in to try and catch up on the news.

    I was happy to see that the city of La Vista, NE has WiFi, and it worked great! I was able to deal with some emails, and catch up on the news I had missed while working.

    As my buddy Moe pointed out, La Vista is God’s Country, at least as far as WiFi is concerned. 🙂

    Icons and Dual Screens and XP and W7, Grrrr…

    12 May 2014

    I use a laptop and an external display, a lot. Sometimes the laptop is directly connected to the external display, and sometimes it is via a docking station.

    Microsoft can’t seem to avoid mucking with icon location. I’ve seen this behavior in XP forever, then in Vista, and hoped it would be fixed in W7. It wasn’t.

    What happens is that when the machine boots up and detects the second display in extend mode, it takes some or all of the icons that should be on the laptop main display (and SHOULD STAY THERE) and moves them to the extended display. It’s frustrating.

    I have to move the darn things back to the main display. You would think that I shouldn’t get worked up about it, but I’ve had icons disappear when the machine restarted or suspended/hibernated, and I’ve had mysterious crashes as well.

    Related (I guess) to this behavior: Windows will take the icons, which are neatly arranged all over the desktop, and bunch them all together along the left side of the main display.

    Supposedly, if you right-click the display and then “Refresh”, the icons will be locked in place. I think that’s crap.

    You would think that the crew at Microsoft could just maybe leave the damn icons alone, unless I move one!

    Hilton Hotels Is A Tracker Online

    19 April 2014

    I’ve noticed more and more targeted ads on some sites. Just now I was adding reviews to Urbanspoon, and the two ads that were included with the web page were for hotels in the Hilton chain that I have accessed via the Hilton HHonors website.

    Now, I know that marketing types are obsessed with “impressions” or whatever voodoo they call it. I wonder what exactly they are trying to accomplish by throwing multiple ads at me on the same page for a hotel I have booked in the past. Are they trying to induce me to take an unscheduled trip to Council Bluffs?

    So, marketeers… I am really your nightmare. I don’t pay attention to ads in general. Sure, I notice them, but I will go back to the HGI in the Bluffs not because of the ad(s), but because I found out on my own that it’s a fine hotel with a good rate.

    I also have seen Facebook targeted ads as well. Just now I saw one from Sportsmans Warehouse for Yak Traks, which I had been looking for before we went to the Grand Canyon in February.

    Online Food Ordering, Potbelly’s

    8 April 2014

    I’ve been buying stuff online for a long time, but yesterday was the first time I’ve bought food online. It was a good experience.

    I’m working this week in Richardson, TX, and we were trying to keep our testing moving along. We decided to order in food from a Potbelly’s Sandwich Shop which is about two blocks from here.

    I really liked the user interface. Click on a menu type (sandwich, sides, etc.) then select an item (a sandwich). You get another menu with options (toppings, etc.), and at the bottom is a text field and a pulldown, for entering a name, or adding the item to a name already entered. This was all intuitive and easy to do.

    Once done, the order total is presented, along with the subtotal for each person (a nice touch that made collecting money for lunch easy).

    At checkout, you could log in using an existing account, create a new account, or just give them a credit card (or pay at the store) and then check out. I like that, BTW. I don’t like having to create an account for each place I do business at. My thought here is that if you can routinely go in and pay with cash or credit and walk out, then you should be able to do that online. And so it was here; I put my credit card in over the SSL link and we were off.

    The website informed me to pick up the order at noon (they deliver also), and Gayle and I drove over there at 1155, and walked back in the building at 1205. The order was right on.

    So this was a good experience, fast and accurate. I liked it.

    From The Department Of Not Finishing Websites…

    29 March 2014

    We are going to Stillwater for dinner, and I decided to check out the menu at a good Mexican restaurant.

    Check out the caption of the picture of the food.

    "Shrimp Something With Something"?

    “Shrimp Something With Something”?

    The other seven photos have no caption at all.

    Now, the interesting thing about this was I noticed the button for other locations on the bottom. I clicked it, and found that this website is for a chainlet in Ohio, not related to the Stillwater restaurant. I found the correct website very quickly, and updated the UrbanSpoon listing.

    Four Experiences Buying Online

    14 March 2014

    I’ve had four online buying experiences over the past couple weeks. This doesn’t include about a dozen eBay transactions, all of which went very smoothly.

    I bought some equipment from newegg.com and tigerdirect.com, and I bought a replacement door handle from carparts.com. In the Newegg transaction, what I bought came all the way from Hong Kong in about four days. The stuff from Tiger Direct came from a warehouse in California in four days. The door handle, I think, came from California also, in six days.

    From the time I selected the stuff I bought until the time I completed the sale, in all cases, was less than five minutes.

    The fourth transaction was not nearly as good. I bought a new laptop for Raegan from Best Buy, using one of their trusted vendors, buy.com (which changed its name to, or was bought by, Rakuten). While the buying process was fast here as well, there were a couple glitches. The laptop was coming via FedEx.

    The FedEx driver tried to deliver Thursday evening, but we weren’t home; they left a door tag. I tried to get FedEx to requeue, since we had missed them by less than 10 minutes, but FedEx couldn’t. I asked them to let me pick it up at their local office, but they said that Buy.com had disallowed that. They would only deliver the next day. We were home the next day at 1600, and waiting in the living room. I was checking fedex.com, and at 1750, suddenly the status showed an attempted delivery. No truck came near our house; no door tag. I got on with FedEx and got a human. This person told me that leaving a door tag was a courtesy, and not a requirement for the driver. I asked if stopping the truck, walking up to the door, and ringing or knocking was a requirement, and she said no.

    WHAT THE HELL?

    A delivery service is not required to walk up to the delivery location? Side note: I called FedEx and lodged a complaint, and sent a feedback message, but NO response. Pretty crappy service.

    Also, FedEx would not redeliver on Saturday, they would not let me pick it up, they had no way to contact the driver. I personally think all of this was bullshit. They would let it be delivered for pickup to a FedEx Kinkos, and only at the nearest location in Edmond. I wasn’t going to take a chance that they would fail to try to deliver (three was the limit), so I had to drive all the way to downtown Edmond the next day.

    We got home and excitedly opened the box. The computer wouldn’t power up. I thought the battery might be shot, so I plugged it in to let it charge a bit. The front light that indicated charging didn’t light up. I was not encouraged. A lot of troubleshooting followed, but the bottom line was the laptop was DOA.

    Buy/Rakuten had no way to get in touch with a live person via telephone. I used an email form to report the issue as a defective product. Surprisingly, I had a response in about an hour. The person wanted to know the UPC and serial number. After getting them, I got a link to a site to print a UPS shipping tag. I reboxed the laptop, taped the sticker, and hauled it to UPS, dropped it off, and got a receipt.

    I immediately sent the tracking number to Buy/Rakuten with a request to ship my replacement NOW. I got an email saying that it would be received and inspected, and a replacement sent after six or seven days. I shot back an immediate demand to ship a replacement NOW. Silence. I sent several other emails, once a day, and at day three, upped it to demand an instant refund.

    To me it was easy. I’d already paid for junk. I sent it back, and they still had my money, and I had no laptop. They had the tracking number, and the weight matched. It was their fault. They should have shipped a replacement immediately.

    After a week, I got a reply, saying that no laptops were in stock, and refund would be made in 1-2 days. I immediately got on buy.com and found that they did have the same laptops in stock. So Buy/Rakuten lied to me.

    The only good news was that the refund showed the next day.

    I sure would not deal with Buy/Rakuten.

    FedEx also has a black eye in this with their fracked up delivery.

    Four online transactions: three went smoothly, one was bad. I don’t have an issue with a computer showing up bad. I don’t really think that it was well packed, so it was probably damaged in shipping. But I do have a problem with the company not being responsive, especially for what was 100% their mistake.

    An Interesting Video Push

    8 March 2014

    Someone donated a Vizio smart TV to St. John’s. It got installed, I ran network and cable to it, and it’s working pretty well.

    One of the church members tried using his iPhone and iPad to push video to the display wirelessly; it didn’t work. He asked me about it, and I did a little research that (1) taught me a little about the Apple AirPlay function, and (2) showed the Vizio didn’t support AirPlay. I got this info back to the church patron, and mentioned that cables could be bought that would connect the iThings to the TV via the HDMI port.

    I was at St. John’s Thursday evening, and noticed that the Vizio had a new HDMI cable, connected to a plain box labeled Apple TV. Hmmm, I thought. I remembered reading about these boxes. I also remembered that Android machines could push to them.

    I did a quick Google search for “airplay for android”, and there were a LOT of hits for the “ZappoTV” app. It looked fairly safe, so I downloaded it.

    It took about 3 minutes, but that app connected my Galaxy S3 to the Apple TV box, and my S3 display was being replicated to the big Vizio! It was amazingly easy. The only thing was that I needed to restart both devices.

    I tried video and still photos. I guess that the app works by using your authenticators to get pages like Google+ or YouTube as a proxy, then sending that data to the Apple TV via AirPlay, which takes the streaming video and dumps it to the Vizio.

    So this was a cool example of cross-platform data sharing, and it works pretty nicely.

    A Data Oddity

    21 February 2014

    As a result of the ice storm we had over the holidays, the driver’s side door handle on my LaCrosse was broken. One oddity of this car is that it has exactly one lock in the four doors, instead of locks on both the driver and passenger front doors.

    So since I can’t repair the door handle, I ordered a replacement. As expected, the GM price for the handle is about $80, and the various aftermarket sites charge around $16.

    But here is the odd thing. Every site I visited, to include GM (the OEM), had some variation of this:

    Replacement Door Handle
    Location: Front, Passenger Side, Exterior
    Material: Plastic
    Type: Exterior
    Door Lock Key Hole Provision: With keyhole

    Since the keyhole is on the driver’s side door handle, this description is wrong. The photo accompanying the description, and the left-to-right orientation of the photos, is in every case correct. It’s just the description that wrongly places the handle on the passenger side.

    Since this was at no less than six sites I visited, including GM, I would imagine that the source data from GM is wrong, and it has been replicated by the various aftermarketers.

    It will be interesting to see what I actually receive.

    Another Connectivity Option – Tethered

    20 February 2014

    My laptop wifi is not working for some reason. Windows 7 recognizes that there is a device there, but it’s greyed out, so I suspect it’s a configuration issue. I didn’t want to mess with it right now, so just for the heck of it, I remembered that my Galaxy S3 supports tethering. I connected a USB cable between the phone and laptop, navigated on the phone menu to tethering, turned it on, and just like that, I’m on the Internet.

    That was just way too easy. W7 supported a Linux-based phone for an IP service. That’s the way it’s supposed to work.

    An Example of Trail Map Accuracy

    16 February 2014

    This is not meant as a critical post. I know that people who put trail maps together do the best they can with their maps.

    For the record, I loved the hike at Bell Cow Lake!

    I used the online trail map, and Google Earth, to overlay our GPS track with the trail map. The technique is called georeferencing, and Google Earth does a great job. Here is what I ended up with:

    Bell Cow Lake Red Trail Georeferenced

    There are a couple things to note. The actual trail clearly does not match the trail marks. The trail goes outside the boundaries of the park a couple times (look at where the trail crosses the road on the lower right).

    If you look at the north section of the trail, there is a blue pushpin. The Redbud Trail is marked by red and white plastic strips tied to trees, but it intersects in a number of places with blue and white strips. I looked briefly at Google Earth, and those look like bypasses to shorten the Redbud Trail.

    I’m going to offer to send my GPS track to the City of Chandler, or maybe just generate a new trail map and send it to them. Now, that would mean I would need to go back and hike the Flat Rock Trail, and the rest of the Redbud and the “blue” sections… 🙂
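
    For anyone who wants to reproduce the overlay: a GPX track can be converted into a KML file that Google Earth opens directly, and then georeferenced against the trail map image. Here is a minimal sketch using the open-source gpxpy library (pip install gpxpy); the file names are hypothetical:

        import gpxpy  # pip install gpxpy

        with open("bell_cow_red.gpx") as f:   # hypothetical file name
            gpx = gpxpy.parse(f)

        # KML wants "lon,lat,alt" tuples, one per point
        coords = " ".join(
            f"{p.longitude},{p.latitude},{p.elevation or 0}"
            for t in gpx.tracks for s in t.segments for p in s.points)

        kml = ('<?xml version="1.0" encoding="UTF-8"?>'
               '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
               '<Placemark><name>GPS track</name>'
               '<LineString><tessellate>1</tessellate>'
               f'<coordinates>{coords}</coordinates>'
               '</LineString></Placemark></Document></kml>')

        with open("bell_cow_red.kml", "w") as f:
            f.write(kml)   # open this file in Google Earth to see the overlay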

    04 April 2015 Update

    So today I got back to Bell Cow!  A group of Boy and Girl Scouts hiked the Flat Rock trail, and I captured the GPS track.  Here is the map above, overlaid with the GPS track in orange.

    Bell Cow Lake north and south overlays

    Not surprisingly, the track captured by the GPS does not match the trail map.  One really different parameter is the trail mileage.  The track looks like it goes out to Point G, and that point is just over 5 miles from the trailhead, not 6.2 miles.  Just above Point G, where the trail runs pretty much east to west, another loop starts and looks like it heads off to the NW.  I would guess that trail is closer to the 6.2-mile mark.
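
    As a sanity check on mileage claims, gpxpy will also total up a track’s length directly; a quick sketch (the file name is hypothetical):

        import gpxpy  # pip install gpxpy

        with open("flat_rock.gpx") as f:   # hypothetical file name
            gpx = gpxpy.parse(f)

        meters = gpx.length_2d()           # point-to-point distance summed
        print(f"{meters / 1609.344:.2f} miles")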

    As with the north side, there are a couple trail deviations that are not accounted for in the mileage.

    Regardless of GPS differences, this is a great place to hike.

    Hmmm, AT&T Did Something To My Phone

    14 February 2014

    I have a Galaxy S3, and my carrier is AT&T.  Last Thursday, the phone was persistently and annoyingly nagging me about a new (and apparently significant) update.

    I was on travel and didn’t want to upgrade my phone in the airport, so I kept canceling the update.  I let it update in the hotel that evening.

    Some of the phone pages changed.  But one thing that is apparent is that my battery life has dropped significantly.  I could usually rely on the phone to go from 100% charged down to 40% or so after a day of use, but the past couple days, it has plummeted south of 20% (today, by 1545).  So something Ma Bell has done isn’t good.

    Spam, A Downside to Blogging

    15 January 2014

    Since my blog is publicly accessible, I get occasional spam. I mark it as such and don’t give it much thought.

    Three years ago, I wrote a post about USAA insurance. It has had some legitimate commentary, but there is clearly some spam in there now that I read back over it.

    So over the past three days, I have had *40+* spam comments on that post, and none on any other posts. I bulk marked them as spam, but clearly someone wrote a script that flooded that one post on my site.

    There are a couple flavors of posts, but the largest variety come from senders with a single first name (Hector, Carina), and they look like a single block of text that has had a thesaurus run against it:

    Hello everybody, here every one is sharing these kinds of familiarity, so it’s fastidious to read this webpage, and I used to visit this web site every day.

    I don’t know if their script is screwed up and was supposed to hit 40+ sites, or what the problem is, but whoever you are, bugger off, will ya?

    Trashed Hard Drive: Just One Of Those Things…

    11 January 2014

    I bought a new 500GB hard drive for my laptop back in December. It was only $54 from Micro Center in Dallas, and had a 7200rpm rotation speed. Pretty darn amazing. I needed the larger drive because the 80GB I had been using was full, mainly photos, ISOs, and the like. I cloned the 80GB drive over to the 500GB, then used GParted to expand the partition to the full disk size. Once done, I installed an upgrade version of Windows 7 over the Windows XP (which trashed the programs and data), then installed all new programs, and transferred most of the data from the 80GB drive to the 500GB drive, and I was in business. None of this took very long.

    One thing I had not transferred over was all of my GPS maps stored in Garmin Mapsource. Now, I got Mapsource with my Garmin GPS. I did not like the fact that Garmin sold topo maps for each state, for about $50 – $100 per map. A group at GPS File Depot has open-source Garmin-compatible maps for download that work fine with Mapsource, and I use their maps. I also donated $50 to them to support their effort. But the maps are installed into a database that Mapsource controls, and there isn’t any way for Mapsource to export the maps.

    There was also the matter of getting Mapsource on the new machine. I can’t find the CD I originally installed (that was 4+ years ago). You can download Mapsource from garmin.com, but it won’t install without some computer gymnastics, and there is still the matter of getting the stored maps transferred.

    I looked around a bit, and eventually found a tool called JaVaWa GMTK (I have no idea where JaVaWa comes from, and I think GMTK is Garmin Mapping Toolkit). This wonderful tool backs up maps stored in Mapsource, and imports them onto another machine. This also eliminates the need for the computer gymnastics mentioned above, as JaVaWa GMTK preps the disk for installation of Mapsource without needing an original CD.

    So… I popped my 80GB XP disk back in the laptop, loaded JaVaWa GMTK, and ran the export. Only one issue: JaVaWa GMTK reported that my Oklahoma and Arkansas topos shared a common map numbering for the SE part of Oklahoma, and for some reason, that was BAD. I removed the Arkansas map from the backup and ran it again. So far, so good. I shut the machine down, then remembered I needed something else done. I powered back up again, did what needed to be done, and got ready to shut down. Windows notified me that it had a couple updates to install. Here was my mistake. I let the updates install. The machine ran a couple extra minutes and then shut off.

    I put the 500GB W7 drive back in, got powered up, and in the meantime connected the XP drive to an external SATA-to-USB interface, plugged it in, and got… nothing. It usually fires up Explorer to let me look at the disk, but not this time. The drive showed E:, but W7 said I needed to format the disk. WTH?

    I swapped the disks, booted up to Windows XP (it’s a dual boot drive, with Fedora Linux 15 on the other partition) using the GRUB bootloader, got the XP splash screen, then a BSOD with Stop 24. Ugh.

    Now, I’ve had these kinds of BSODs before. They are damned hard to fix. Microsoft puts little information out. I know that Stop 24 (NTFS_FILE_SYSTEM) has to have a specific cause. Why Microsoft won’t publish those is a mystery. I’ve looked extensively for them. In this case I even read many, many comments from Microsoft Certified people, and people who claimed to work for Microsoft, and they were all basically guessing at the cause. I also saw Stop 7B (INACCESSIBLE_BOOT_DEVICE). The consensus is that these Stops mean that NTFS is corrupt in some way. There was the usual weird set of causes, but most people seemed to think that they are a direct result of powering off the computer during some critical time.

    I fired up the Linux part of the disk. Worked fine.

    I’m convinced it was the updates that installed. I didn’t have any inadvertent power off.

    Most Microsoft people offered the advice to reinstall the OS. Not really what I wanted to do, since I was trying to get the map data off the disk. A secondary consensus opinion was to run chkdsk against the drive. So that’s what I decided to do.

    I plugged in the SATA-to-USB, got my drive E: again, and ran chkdsk e: /f against it. As near as I can tell, it ran for about six hours. At the end of the process, I still could not read the disk. I restarted everything, and this time, the disk came up as a more-or-less readable E:.

    I quickly ran JaVaWa GMTK on the 500GB drive, and pointed the restore function at the directory on E: where I stored the backed up maps. JaVaWa GMTK ran about a half hour, and snagged all the maps. I then installed Mapsource, and I had everything back!

    I poked around on the disk, and the chkdsk function hadn’t managed to restore everything. My Microsoft Office was totally blown away (fortunately, the Office documents were in a different directory).

    So a near PITA was avoided. I am bugged that a Microsoft update was the likely cause of the issue, but chkdsk to the rescue. MapSource is working fine on the new disk. I got another copy of Office 2010 through the Microsoft EPP for a decent price, so no pain there.

    Security? Who Needs It?

    2 January 2014

    With some time on my hands this evening, I decided to check out the wireless network for the hotel we are staying at. They have about a 3Mbps pipe out of here (probably a cable modem), and a number of wireless access points. I can see four of them. Every single one identifies itself, and every single one of them has the default admin password.

    I don’t know how long this hotel has been set up like this, but whoever did the setup needs to fix it. I intend to tell them about it tomorrow morning.

    House Internet Upgrade

    25 December 2013

    We live in NE Oklahoma City. When we moved here in 1997, we moved from a Cox Cable internet service area to one with little choice.

    We ended up going with a service called Sprint Broadband Direct. An antenna mounted on a building in downtown OKC put a decent 200-500Kbps signal out to an antenna mounted on our roof. Every couple years I would have to thin the trees on the south side of the house to improve the signal. We also just had Dish service for TV.

    Back around 2006, Cox Cable came into the area. I have a piece of the main cable that runs underground; the center conductor is 3/4″ of pure copper, and it has three overbraids, one of which is a thin sheet of solid copper. In 2008, Sprint shut down. We signed up with Cox after looking at another wireless service that was distributed around the Edmond area.

    Cox brought us around 800Kbps-2Mbps at first. I noticed speed improvements over the years, and recently we were seeing consistent 4Mbps speeds. About a month ago, Cox sent an email saying that the service was upgraded, but we would need to replace our cable modem, which was a DOCSIS 2 device. Cox sells DOCSIS 3 (D3) devices, but at full price, or they would rent you one for about $10 a month.

    A side note: the Scientific Atlanta D2 device is crippled by Cox to give out only one IP address per installation; the modem itself can hand out a couple hundred. I found this out when testing a new house WiFi AP (see below).

    I figured I could find a good D3 modem from an online vendor for less, and a quick Google search showed devices in the $70 range. But yesterday afternoon at WalMart, I walked past a display of no less than six different D3 modems (I had no idea WM would sell those). The price ranges were $65-$85. I picked up a NetGear 150Mbps device for $75.

    When we got home, I did a speed test with the D2 modem and got 6.5Mbps. I hooked up the D3, got it registered through Cox, and immediately got 12.75Mbps down and 7.1Mbps up. A decent improvement. So we are in pretty good shape Internet-wise here in the house. Next, I am going to look at the same upgrade for the St. John’s connection, also through Cox.
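
    A side note: if you’d rather script the before-and-after measurements than use a browser speed test, the third-party speedtest-cli package can do it; a minimal sketch, assuming that package’s Python API:

        import speedtest  # pip install speedtest-cli

        st = speedtest.Speedtest()
        st.get_best_server()              # pick the lowest-latency server
        down = st.download() / 1_000_000  # results come back in bits/sec
        up = st.upload() / 1_000_000
        print(f"{down:.2f} Mbps down / {up:.2f} Mbps up")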

    Since we have good Internet, we also need to distribute it around the house. When I tested the connection from the D2 and D3 modems, I disconnected them from the house network and ran a cable from the modem to my laptop, since I could stand by the modem (which is in the back room of the house, where the Cox cable comes in).

    We have a Motorola G wifi that connects to the cable modem, and then some computers are hardwired to it, and the rest connect via Wifi. I decided to give the Motorola to Ian when he goes back to college in January, and was going to look for a new house WiFi AP at some point. When I was in Dallas a couple weeks ago, I went by MicroCenter to pick up a new disk drive for my laptop (500GB, only $50!), and as I wandered through the store, they had an entire pallet of Tenda N Wifi APs for $15. Yes, fifteen dollars. I checked online, and no one really reported issues with the brand, so I bought one. I should have bought several.

    After I got back to OKC, I fired up the Tenda, configured it, and then put it and my existing G router on a switch that was connected to the old Cox modem. I could not get it to work with the Cox internet, while the WiFi stuff was doing great. After some experimentation, I found that the old Cox cable modem was limited to serving a single downstream device. The cable modem DHCPs to the downstream, and I could not make it give two. So I chained the Tenda downstream of my existing Motorola, and proceeded to play with it a couple days to make sure it was working OK.

    After the couple days, I took the Motorola out and connected the Tenda to the cable modem, and everything is working fine.

    I have a total of nine devices hooked into the WiFi now, and two wired. Speeds are impressive. Fast, and inexpensive. I like it.

    Something New, Signup Software

    24 November 2013

    Our Scout Troop 15 is looking at hosting an event to help Webelos Scouts earn some awards. This would require registration, and an on-line signup would be helpful: the Cubs would enter their names and the awards they want to work on. I thought that there would be some open source software to do this already.

    My first Google search for “open source signup software” ended up returning a lot of “event registration” software. OK, that made sense.

    There are a number of companies that do this sort of thing, with prices in the couple-of-bucks-per-person range. Given that we aren’t looking to make money on this event, that wasn’t going to work.

    I looked at a number of hosted event software packages. Most were aimed at big seminars. But there was one that looked like it might work, a WordPress plugin called Event Espresso.

    First of all, I needed a local instance of WordPress. That HAD to be easy, I thought. I fired up the Fedora side of my laptop drive, and did a yum install wordpress. That worked fine, but for some reason I could not figure out how to configure either WordPress or the MySQL database that yum installed. I played with it for a bit (no more than 15 min) and went to bed.

    A couple days later, I came back to the search. I decided to see if WordPress ran on Windows. Microsoft has an installer for WordPress for Windows using the Microsoft Web Platform Installer. On my XP machine, it did a poor job of installing. I started an install one evening, it completed downloads, but the next morning it was still installing. Rather, it claimed to be installing, but there was no disk activity; the thing was hung. I tried it again using a W7 standard desktop; same thing. I did a clean W7 install on a spare disk, and same thing. It just did not work.

    I played with the WordPress on Fedora a bit more and had no luck. I had an inspiration, though, and looked for and quickly found a WordPress Live CD based on Debian. I installed it (after one detour; the thing requires a 64 bit architecture, so I had to abandon my test machine for a more recent laptop and yet another spare disk), got it configured, and made a test post.

    Next was getting the Espresso plugin installed. This was done by uploading the Espresso zip file from my laptop to the WordPress server. I had three problems here.

    First, the destination directory on the server was set to mode 493 – writes disallowed (more on that odd number below). I changed it to 777.

    Second, WordPress requires an FTP server on the server. The Live CD didn’t enable one. I SSH’d into the machine, did an apt-get install ftpd. That got the FTP server running, but WordPress could not contact it. I remembered that ftpd does not allow root access, so back to SSH. I created a new user and made it part of the root group, and now WordPress could do what it needed.

    A side note: if I am able to upload the plugin zip file, why does the FTP server need to run? The uploaded file could be unpacked and installed from the server. That’s weird.

    Third, the plugin install failed because the directory created by the WordPress plugin install process had the 493 mode again. So I changed it to 777 again.
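
    About that odd “mode 493”: it’s not mysterious, just octal 755 displayed as a decimal number. A quick check, with a hypothetical path standing in for the plugin directory:

        import os

        print(oct(493))   # -> 0o755, i.e. rwxr-xr-x: group/other cannot write
        # The blunt fix described above, on a hypothetical plugin path:
        os.chmod("/var/www/wordpress/wp-content/plugins/event-espresso", 0o777)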

    In the end, the test event page I created using Event Espresso worked fine, but the concept of “categories”, which I interpreted as being “I am Bill and I would like to sign up for the event, and I would like to take Course A and Course E”, was really “Category A is X type of event”. So it does not work for me.

    I will keep looking, but I am thinking that it will be just about as easy to write a static page, which would feed PHP and a flat file to generate a confirmation page, then a verification page. I’ve a couple months to get that done.
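
    For the record, here is a minimal sketch of that static-page-plus-flat-file idea, in Python rather than the PHP I’d actually use; all names and fields are hypothetical:

        from http.server import BaseHTTPRequestHandler, HTTPServer
        from urllib.parse import parse_qs
        import csv

        SIGNUP_FILE = "signups.csv"  # the flat file

        class SignupHandler(BaseHTTPRequestHandler):
            def do_POST(self):
                length = int(self.headers.get("Content-Length", 0))
                fields = parse_qs(self.rfile.read(length).decode())
                name = fields.get("name", [""])[0]
                award = fields.get("award", [""])[0]
                with open(SIGNUP_FILE, "a", newline="") as f:
                    csv.writer(f).writerow([name, award])   # record the signup
                self.send_response(200)                     # confirmation page
                self.send_header("Content-Type", "text/html")
                self.end_headers()
                self.wfile.write(
                    f"<html><body>Thanks, {name}! You are signed up "
                    f"for {award}.</body></html>".encode())

        if __name__ == "__main__":
            HTTPServer(("", 8000), SignupHandler).serve_forever()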

    24 November morning update:

    After another short round of looking, I ran across EventZilla, which does online event management. It’s still not exactly what I was looking for, but it looks like it will work for us. They don’t charge for free events (which is what we are doing). I built the site (including a couple custom logos that I built with the GIMP) in about 30 minutes. Pretty cool.

    A W7 Wireless Oddity

    4 November 2013

    I learned something new this evening doing a Windows 7 installation.

    I was installing W7 on my HP 6930p. I have had trouble installing XP and Vista on the 6930p in the past, but have never had an issue with the wifi device. In this case, I had no issues during the installation, and the wifi was correctly identified. The adapter showed up in the network devices list and was enabled, but it would not show any wireless networks.

    I did Windows diagnostics, and Windows reported the following error: “Windows couldn’t automatically bind the IP protocol stack to the network adapter”. So I was off to Google, and Bing. I bet there were 100 topics addressing this error. Most of the suggestions were along the lines of “Re-install the latest version of the driver”. One from a Microsoft tech rep (who obviously was not in possession of a single clue) said that the problem could be fixed by running a surface scan. I read most of the suggestions. Some of them I tried, but they did not help. I didn’t focus a lot of work on this problem; I had a good wired connection that was working fine, and I’m just experimenting with this Windows installation.

    But I was working on something else here, and I had a flash that maybe the Windows Wireless Zero Configuration Service was not started. It took a couple extra steps, but I found the Services menu. Hmmm, no Wireless Zero Configuration Service. But… there was a WLAN AutoConfig that was disabled, right under the Wired AutoConfig that was enabled. I enabled the WLAN AutoConfig entry, then started it, waited a couple seconds, then unplugged my network cable.
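
    For what it’s worth, the same fix can be scripted; WLAN AutoConfig’s internal service name is WlanSvc, and from an elevated prompt something like this should do it (a sketch, not what I ran on that machine):

        import subprocess

        # Set WLAN AutoConfig to start automatically, then start it now.
        subprocess.run(["sc", "config", "WlanSvc", "start=", "auto"], check=True)
        subprocess.run(["net", "start", "WlanSvc"], check=True)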

    I watched the wireless icon in the taskbar pop up “Connections are available”, and then connected to my house wifi, and it was up and running.

    I think it’s kind of less than smart to find and install a wireless device, but not automagically start the service that makes it go. Live and learn.

    A Couple Good Online Purchasing Experiences

    29 September 2013

    I decided a couple months ago that it would be cool to make a “license plate” for each of our Troop 15 trailers. We use them to haul equipment to campouts.

    I sort of thought that I would use something like The GIMP to lay out some words and a Scout logo, then I would print the design out, maybe on some weather-resistant material, and glue it on a metal or plastic or wood carrier. First, I needed to find out how big a standard plate is. I searched on Google, and along with a couple references for plate size, got a couple hits for companies that make custom license tags! It really wasn’t something I had considered.

    I checked online reviews, and one called BuildASign.com had a good reputation. I jumped on the site, and used an online WYSIWYG editor to create a design in about five minutes flat. I was pleasantly surprised that the tags were about $14 each, which I thought was very reasonable. I submitted the order, and the things showed up about five days later.

    This is what I built:

    TR15Tag

    The tags looked great on the back of the two trailers. They are printed on metal, so they should be very durable.

    I have had a bumper sticker that reads “I’m Proud Of My Boy Scout” on the back of my car for a while. When Ian was awarded his Eagle, I got another bumper sticker that reads “I’m Proud Of My Eagle Scout”, and put it right underneath the one I already had. Erin is working on her Girl Scout Silver Award, and I thought that I needed at least an “I’m Proud Of My Girl Scout” on the other side of the bumper, with room for a Silver Award sticker underneath it. I had bought both of the Boy Scout bumper stickers at the local BSA office. But… there was nothing similar at the local GS office.

    This led me to an online search. Nothing. But my experience with the license tags made me think of the possibility of finding a bumper sticker maker. And it was even so. I found an outfit called Zazzle.com that specialized in custom bumper stickers. I used another WYSIWYG editor to build a nice example:

    GS_BS_3

    I ordered two of these (one for my car and one for Raegan’s car). They were $4 each, again, very reasonable. They arrived in five days!

    So the new bumper sticker is on my car; we will put the other one on when we get her car back from being worked on.

    I think that both of these experiences were very positive. The quality was fine, the speed of delivery more than satisfactory, and the price quite reasonable.

    27 October 2013 Update

    Our daughter Erin earned her Girl Scout Silver Award, and in the spirit of the Boy Scout Eagle Scout bumper sticker, I went back to Zazzle and got this:

    GS_BS_Silver

    While I was there, I noticed they did business cards also. I used the same sort of WYSIWYG work for about 10 minutes and ordered 100 of those. The package with the completed bumper stickers and cards arrived four days later! Quality was as expected. I am very happy with the results.

    One More Privacy Erosion

    11 July 2013

    As I drove home this evening along I-35, I was passed by an OKC Police Department car. I don’t think it was a normal car, as it was not sporting the dual network hotspot antennas that most public safety vehicles here have.

    This one was equipped with four cameras, two each on the corners of the roof, pointing about 30 degrees off-axis to the driving direction of the car.

    I imagine these are the latest in official surveillance of as much of the public as they can get away with: cameras to scan license tags.

    These cameras are supposedly used for looking for stolen vehicles, or vehicles belonging to people with warrants. What we are not told is what is done with the data collected, how it is retained, and the like.

    Although I am fully aware of the doctrine that we have no assurance of privacy while motoring around, I believe that unlimited data collection by “security” agencies is contrary to liberty.

    I am opposed to the spending of taxpayer money so that police agencies can more easily spy on us.

    Replacing Garmin MapSource

    12 June 2013

    I use a Garmin GPS-60 when I backpack or hike. The unit came with Garmin MapSource, which I used for several years exclusively. Eventually MapSource was replaced by Garmin Basecamp. Both would extract GPS tracks with relative ease. One thing I didn’t like was Garmin wanting huge dollars for topo maps. They ran $50 PER STATE! I found a project that had digitized maps for all states (and a lot of international locations) and converted them into the format needed by MapSource/Basecamp. They asked for $15 donations, I sent them $50.

    But occasionally I forget to bring my Garmin GPS. I use my Samsung Galaxy S3 with the Runkeeper app in that case, which uploads GPS tracks to the Runkeeper website, and can export GPX files. BUT: I discovered quickly that MapSource/Basecamp only accepts tracks from Garmin GPSs, not general GPX files.

    I found out from my hike yesterday that when you use Runkeeper, and use the pause function, then restart the app again (we hiked a loop, paused Runkeeper, drove to another trail, restarted Runkeeper and hiked that loop), the uploaded tracks are joined on the Runkeeper website. So I need a function to separate the single GPS track into two. This is pretty common: when a GPS unit loses lock, or is turned off, you need to be able to edit the track information to join segments broken by signal loss, or delete spurious track pieces that are generated during the GPS unit startup. MapSource/Basecamp does this pretty well, but again, only for tracks downloaded from a Garmin GPS.
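
    One way to do the separation yourself is to split the track wherever there is a big time gap between points. A minimal sketch with the gpxpy library (pip install gpxpy), assuming the exported points carry timestamps; the 10-minute threshold and file names are my own assumptions:

        import gpxpy            # pip install gpxpy
        import gpxpy.gpx

        GAP_SECONDS = 600       # assumed: a 10-minute gap means a new hike

        with open("joined_hike.gpx") as f:      # hypothetical export
            gpx = gpxpy.parse(f)

        out = gpxpy.gpx.GPX()
        track = gpxpy.gpx.GPXTrack()
        out.tracks.append(track)
        segment = gpxpy.gpx.GPXTrackSegment()
        track.segments.append(segment)

        last_time = None
        for p in (p for t in gpx.tracks for s in t.segments for p in s.points):
            if last_time and (p.time - last_time).total_seconds() > GAP_SECONDS:
                segment = gpxpy.gpx.GPXTrackSegment()  # gap: new segment
                track.segments.append(segment)
            segment.points.append(p)
            last_time = p.time

        with open("split_hike.gpx", "w") as f:
            f.write(out.to_xml())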

    BTW, the issue with MapSource/Basecamp not importing GPX data is clearly an administrative decision made by Garmin. It also keeps me from viewing downloaded GPS tracks to “preview” hikes. So to the Garmin corporation, a general observation: you suck.

    I played around with a number of other GPS programs, including those that support GPX files, such as Google Earth, and Open Street Maps. These didn’t do a good job of letting me edit, or they didn’t produce a good altitude plot, or there was some other problem. One good program that had promise was ExpertGPS. It’s $74, but has good topo map support, and would get aerial imagery and overlay it. I got an idea, and had it import the GPX that was produced by RunKeeper. No problem there. I connected my GPS, and transferred the GPX into it. No problem, and the track now showed up on the GPS60. The only issue I had was when ExpertGPS told me that my GPS60 would only take 750 data points, and so the track needed to be edited down. ExpertGPS also told me it had a function to do that, and it worked.

    Next, I shut down the ExpertGPS program, and fired up MapSource. I told it to transfer the data from the GPS60, and it did!

    So I successfully “laundered” RunKeeper GPS tracks from my Galaxy S3 through the GPS60 and ExpertGPS to MapSource. From there, I was able to make good topo maps, and altitude maps.

    I downloaded and installed the EasyGPS program, and was able to do the same laundering thing.

    So I am able to use my RunKeeper tracks, or tracks that I have downloaded from the Internet, on MapSource, in spite of the deliberate lack of support from Garmin.

    Data wants to be freely usable!

    Troubleshooting an HP G72 Laptop

    29 May 2013

    I have an HP G72 laptop, loaded with Windows 7 Home Premium, here at the house that has exhibited a series of erratic behaviors. There have been problems with the built-in Wifi, the CD/DVD writer, and the display. The problems were intermittent, which made them very difficult to troubleshoot. The display in particular was troublesome; it would shut off at random intervals ranging from power-on plus one minute, to hours. After checking it out repeatedly over a couple months, the laptop’s owner also had HP check it out and report that there weren’t any issues. After the problems, particularly with the display, got so bad over a short period of time, he declared the machine fried, gave it to me to recover his files, and bought a new one, to which the files were transferred. I let the machine sit for a bit, and then decided to see if I could part it out.

    I wanted to beat the machine a bit before I started parting it out, so I got my trusty System Rescue CD, booted the machine from it, and started exercising it. I expected it to fail in short order. WiFi came up just fine, so did the display and the CD/DVD. And it didn’t fail. I let it sit for almost two full weeks, with a script loading Open Office, the GIMP, and several other large programs, over and over. It never failed. Hmmm….

    I started up Windows again. The machine wouldn’t come up on WiFi, and less than two minutes later the display shut off. I spent a couple hours running System Rescue CD, then Windows, then Trinity Rescue, then Windows, then a Fedora Live CD, then Windows. In every case, the machine booted and ran with no problem from the various Linux CDs/DVDs, and then failed when running Windows. By this time, it was clear that I had a flaky Windows installation.

    I reloaded every driver. I tried updating Windows. The machine was still flaky. I looked online for hours for some way to enable a log that would show what DLL/OCX/whatever was having problems, with no luck. Finally, I decided that I needed to try a repair action.

    I obtained a Windows 7 Home Edition DVD. I tried seven or eight times to effect a repair. This failed in every case; sometimes I was thwarted by a missing DVD drive, but most times the repair would fail when Windows demanded to go online and get the most recent update files; it would be in “please wait” mode for long periods (once overnight). When I tried to get the repair process to run without the online update, the repair would fail with a completely non-helpful error that it could not complete the process.

    I decided that a complete reinstall was needed. This literally took 20 minutes. I let the machine sit overnight; it kept running. Two things I noticed: the display was set to 800×600 and the generic VGA driver; and the WiFi was an unknown device.

    At this point, I decided to do the Windows activation process. This failed. I wondered if this was due to the hardware mismatch between the previous activation and the missing display and WiFi. I decided to let Windows 7 start updating (I had the laptop wired to my house network), and then I would go look for the drivers at HP.

    While updates were going on, I went to HP and saw that there were about 12 different G72 versions, with different WiFi devices. It was getting late, and I decided to mess with it later. The next afternoon, Windows had updated and rebooted twice, and I decided to look for the WiFi driver. I sort of noticed offhand that the display looked a lot better. I checked the resolution; it was in 1600×900 mode! I pulled up the System app, and the proper display driver was in place. There weren’t any of the dreaded Unknown Devices, not even WiFi. I saw the disconnected WiFi icon in the system tray, and it connected pretty much instantly to the house WiFi.

    I used the WiFi to download a couple apps like LibreOffice, the various Adobe readers and players, and Firefox. I also noticed that I had IE 10 on the machine.

    I decided to re-try Activation, and it worked. The machine has been sitting now for about four days and is solid as a rock.

    Observations:

  • I’ve had several machines in the past six months or so, either Windows 7 or Vista, that have been flaky. I think that they are OS issues.
  • Windows needs to enable logging so that a bad DLL or program can be isolated.
  • Windows activation will have problems if you change your wifi and display drivers.
  • Windows 7 updates are smart enough to get the correct driver for at least some unknown laptop devices.
  • I’m guessing I will see at least a couple more Vista and 7 machines with flaky behavior. Overall, I think that the machines are at least as stable as XP, but when they have problems, they go flaky before they crater.

    An Interesting RADAR Artifact

    31 March 2013

    I was looking at the wx on Channel 9 this evening. I noticed a small echo in north Oklahoma that didn’t seem to be a storm. I looked at the NWS site for OKC, didn’t see it, but switched to the Vance AFB site, and there it was. This is a screen capture:

    Echo_near_Enid

    I checked the Google map and satellite image of the area; there doesn’t seem to be a lot there. I wonder if it is smoke from a grass fire. It changes geometry, but the east and west ranges don’t move. Smoke usually has a fixed point on the upstream end, and then widens and lengthens with time.

    I’ll check the news tomorrow and see if anything is mentioned.

    31 March update:

    I sent an email to the NWS office in Norman, and got a very nice reply back in just a couple hours (that’s a professional!). The echo is from a wind farm that was built in the last half of 2012. The wind farm is marked on aviation maps already, but it doesn’t show up in Google Earth images yet. Very cool.

    Hooray for DFW Airport!

    5 December 2012

    Kudos to DFW airport. They converted from the previous pay-per-use system for wifi to free access. Since I spend more time at DFW than most any other airport, I’m happy about this. It gives me a more convenient way to get some work done on a long layover or weather delay.

    Taking Apart An LED TV

    10 November 2012

    This one hurts to write. We bought a 47″ LG LED Smart TV last January. It was a nice TV, bright picture, and it would play streaming video. Very cool. I had it on its stand in the den, and then a couple months ago one of the cats tried to leap on top of it, found the couple inches of thickness too narrow to stand on, and managed to push the TV off for an almost-four-foot fall to the floor. The thing was sporting multiple impact spiderwebs on the display, and was useless. It would cost more than a new (larger!) TV to fix it, so I reluctantly took it apart. It was remarkably simple in terms of the number of parts.

    Since I was wanting to take a look at the processing end of the TV first, that’s what I went for. It’s worth noting that the entire disassembly was done with a #2 Phillips screwdriver.

    There are a total of seven PWBs in the TV. Four of them were small; two drove the LED array, and the other two, I think, drove the status LEDs in the bezel and handled the IR remote. Here are the other three.

    The PWB to the left is, I think, a driver for the four speakers in the TV. The center is a CPU board. It has all the inputs and outputs for video and sound. There was no obvious GPU; I’m going to look up some of the part numbers, but I suspect that all the processing is done via custom ICs. The board to the right is the power board.

    The business end of the TV is here. It is built of a set of layers that are fixed to a fairly rigid metal frame.

    The three PWBs were mounted to the back of the frame. The frame has an array of white LEDs inset (there is a closeup below), which are the backlight for the TV. The white LEDs are on the other side of the frame also.

    The first layer is basically a white piece of plastic. I imagine it is essentially a reflecting surface to make sure all the white light is headed towards the viewer.

    Next is a translucent panel that has grooves machined into one side. I looked at it using a magnifying glass, and it looks like the grooves are at a 45deg angle to the vertical. This panel would reflect the white light from the LEDs out towards the viewer.

    On top of the translucent panel are a pair of translucent panels that are not grooved. I speculate that these two are diffusers to soften the LED light from the grooves to make it more even.

    Finally, the last panel is the display panel itself, known as the “glass”. This part is seriously cracked.

    There were other mechanical parts that I didn’t pay a lot of attention to, as they are very straightforward, such as the stand, trim pieces, the bezel, and the like.

    This is a closeup of the backlight LEDs.

    So ends our nice TV. We’ve been making do with a 19″ panel until we buy a new one. In case you are wondering, yes, the cat is still alive. My lesson learned here is to mount the TV to the bookshelves in back of it next time. Oh well.

    Voting Machines And Election Risk

    7 November 2012

    I have written about this topic before, and it bears some more discussion no matter how the elections today turn out.

    Voting machines that do not produce a paper trail, and do not have their software and hardware reviewed by independent experts, are a menace to our country.

    There have been numerous anecdotes during this election day of electronic voting machines not recording votes properly. As I have said before, any voting machine that is not subject to formal, independent inspection of the source code, and testing, should not be used for a public election in any way.

    No voting machine should be connected to a network while being used operationally. The risk of an external connection being used to penetrate the machine during voting is just too high. I could understand connecting to the machine at the conclusion of voting to download results using a laptop or other handheld device under the control of an elections official. A truly local LAN being used would be OK as well, just don’t have any external telecom connections.

    Any voting system should have a paper or other nonvolatile backup. The system we use in Oklahoma is a good example here: a paper ballot that is electronically scanned and counted; the paper ballot is retained and can be used for an audit.

    Machines should be owned by the local election board or equivalent. If maintenance needs to be performed, then the people doing it need to be essentially “cleared”, then the machine checked by an independent expert again.

    Paperless elections are right out. If a mostly-paperless solution is insisted on (like touchscreen voting), then a paper receipt must be provided to the voter.

    Laws should be promulgated to ensure these safeguards are in place all across the nation. The security of our national, state, and local elections demands no less.

    Nvidia Drove Me Slightly Nuts

    13 September 2012

    I hung a projector in the St. John’s computer lab, and wanted the computer lab teacher computer to drive it. I wanted a dual-monitor setup so the flat panel on the table would be extended to the projector. This requires either a VGA splitter (which doesn’t support extending), or a dual-port card (say, with a DVI and a VGA port), or two cards in the machine. I had a dual-port Nvidia GeForce FX 5200 card that I thought would do the trick.

    But first, I had a Gateway E4100 at the lab teacher position. This computer had a motherboard VGA port that was disabled, and an add-on AGP 8X VGA card. The first thing I tried to do was turn on the motherboard port. Usually this is done in BIOS. Not a bit of luck. I downloaded the Intel manual for the mobo and read it to try to find a setting, or a jumper, or anything. Nothing. Somehow, the Air Force (which bought the computer originally, before we got it as a donation) had the mobo video disabled; I wish I knew how.

    Try two. I pulled the existing single-port AGP card and replaced it with the FX 5200. It came up OK and had both LCD and projector video. I had some “serious problem” reports, and traced them to USB drivers for devices that were not in the lab any more. I took the offending drivers out, and now the system was fully stable. It only had XP SP2, so I decided to upgrade it. I took the machine home to work on it there, and it spent a couple of hours in the house getting SP3. After SP3 was in, I rebooted, and the machine utterly failed. It would only come up in safe mode. I spent some time deleting a lot of stuff that I thought the machine didn’t need, then fired it up again, and it died again. And again. Safe mode was still working, but erratically.

    Finally I got frustrated with the Gateway, and started again on Try three, with a generic PC that we had bought back in 2005. This machine only had SP1 on it, so I started an update. It was the same thing: the card died after SP3 was loaded.

    Now, I didn’t connect those two SP3 failures at the time. Both of these machines had been loaded with a LOT of weird software, and had seen many installs and uninstalls over the past years. I thought that was the cause of the instability.

    At home, we have a Dell Dimension 4600 sitting and not doing anything. I decided to try it. It needed a disk, so I slammed a spare 80GB in and started a clean XP load. It was all working fine, until I updated to… SP3. At this point I realized I had a trend. I did some research on compatibility between the FX 5200 and XP SP3, and didn’t find any indications of a known problem (that’s after reading release notes, driver notes, forums, etc.). I thought about it for a bit, and decided to see what would happen if I just killed the Nvidia driver.

    So I booted into safe mode, started Add/Remove Programs, and deleted the drivers. Second, I went into System and Device Manager, and deleted the Nvidia card under Display Adapters. Third, I did a scan for new hardware. Windows found the Nvidia card, asked if it could hit the internet for drivers, I said NO, and it installed a generic VGA driver. Finally, I moved the generic Windows Nvidia driver off to another directory. Now, I left the computer up and running in safe mode for a while, downloaded some stuff, and ran some programs. All working well. I rebooted eventually, both the monitors came up, and Windows booted all the way and is working well.

    I’ve since loaded some more software on the machine, and done some more updates, and now the machine has been running several hours driving two screens, just like I want.

    So there is some bad incompatibility between the Nvidia GeForce FX 5200 card, XP SP3, and the Nvidia driver software. The Microsoft generic driver also has some issue. Zorching them and using the lowest-common-denominator VGA driver is a less-than-elegant workaround, but I have two monitors with 1024×768, and that’s the bottom line.

    WiFi at… Lowes?

    11 September 2012

    I’ve been in Lowe’s in three states over the past month or so. In each case, there was a sign on the building that stated that the location offered free WiFi.

    I connected to it via my Blackberry, and it had decent throughput.

    I think this is cool. More and more restaurants offer WiFi, most hotels (and a lot of those are free), airports, and now Lowe’s. WiFi isn’t quite ubiquitous, but it’s getting very easy to find.

    LCD Monitor Stands Very Inexpensively

    2 September 2012

    Regular readers know that I’m the IT Support Department for St. John’s Episcopal School and Church. Since I am a volunteer, the budget for the IT Department is also pretty low. I buy parts when I need to, but will re-use, salvage, and scrounge when able.

    It was even so when it came to some LCDs donated by the company of one of our patrons. We got about 40 LCDs. Some of them had plastic/metal stands; most of them were attached to swinging arms that bolt to a desktop. Nine of the LCDs had nothing. These were all Dell LCDs in the 17″ and 19″ class.

    I went online to look for stands. YIKES! I found stands on eBay for $100+. EACH. Retail, the stands were $220+. Clearly, something else needed to be done. I took a close look at the plastic/metal stands and got the idea I could replicate them. The thing I realized was that the center of gravity of the LCD had to be over the center of the base. Once I got that idea in my head, I went out to the garage and built a prototype in about a half hour. Total cost in materials was about $6.

    The tradeoff is that the monitor will not tilt up and down. It swings left to right easily by moving the base. Up and down can be compensated for by increasing or decreasing the arm of the stand.

    The stand has three parts:

    The base is at the lower left, the arm to the right, and the plate at the upper left.

    The base is made from 1/2″ or 3/4″ thick MDF, about 6″ x 12″ (plywood would work as well). The arm is made from a 2×4, and is 12″ to 14″ long. The critical part is the plate. The plate has to fit into a recessed area on the back of the LCD. There are four screws that attach the plate to the LCD. The plate for the Dell LCDs measures 4-5/8″ square. I used one of the Dell metal plates for the template. This is the Dell plate:

    My first plate was made out of 3/4″ MDF. When I drilled the holes for the screws through it, it was so close to the edge that the MDF delaminated (if I had a sacrificial surface like a scrap of wood underneath it probably would have been a clean drill). I changed over to #2 1x wood for the rest of the plates; I used a 1×6 that I cut down to the right size.

    Cut all the pieces out. I used a table saw, but a hand saw, jig saw, or circular saw would also work. The only critical tolerances are for the plate. The Dells have a recessed area; the minimum cut has to have room for the four mounting bolts, some extra, but not any bigger than the recessed area.

    Drill four holes for the mounting bolts (I used a 1/8″ drill bit for all of this; the drill bit was sized based on the holes in the corners of the metal Dell plate). Also drill four holes in the center part of the plate for the screws that will hold the plate to the arm (those screws are visible in the photo below).

    Drill a couple holes in the base (I held the arm on top of the base and drew its outline); I used three screws.

    As Norm Abram would say, now for some assembly! I used 1-1/4″ drywall screws for the assembly. First, drive screws through the base into the bottom of the arm (after this, you will see why I used three of them). Next, position the plate on the arm and drive screws through the four holes into the arm.

    It should look like this:

    That’s some good work, isn’t it?

    Paint the stands if you want. I painted them with a couple coats of black oil-based acrylic.

    To attach the LCD, get bolts of the right length, run them through some washers, then through the wooden mounting plate into the nuts in the LCD, and tighten them down. The bolts that came with the LCDs were metric, M4.7 x 8mm, which were too short for the 3/4″ thickness of the plate. I went to Home Depot and bought sets of M4.7 x 25mm ($1.37 per pair).
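
    The arithmetic behind the bolt swap, in case you adapt the design to different stock (this ignores washer thickness and any recess depth, so the engagement figure is approximate):

        # 3/4" plate in millimeters vs. the two bolt lengths
        plate_mm = 0.75 * 25.4            # 19.05 mm of wood to pass through
        stock_mm, replacement_mm = 8, 25
        print(stock_mm - plate_mm)        # about -11 mm: can't even clear the plate
        print(replacement_mm - plate_mm)  # about 6 mm left to engage the LCD nuts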

    So the total cost was about $6 per stand. The biggest per-stand cost was the four metric bolts at just under $3. The rest was seven 1-1/4″ drywall screws and a share of a quart of paint (the single biggest purchase at $12, of which I used about 1/3).

    The stands are sturdy and don’t look too bad. I ran some coarse sandpaper over the edges to ease them a little.

    This is what one looks like assembled:

    This was a fun little project that got some small pieces of 2×4, 1×6, and MDF out of my garage. The total time to cut and assemble the pieces was less than 45 minutes. Painting was quick; the time for the two coats of paint to dry was a lot longer than the cutting and assembly.

    And I saved us about $1200!

    IT Cost Keeps Dropping

    18 August 2012

    I am continually amazed at how inexpensive network equipment continues to get. Last year we bought a set of N wifi devices for the school, for $30 each. One has failed (not the wifi part; the uplink port is very noisy and dropping 70-80% of the packets), so I moved a less-used device to that location, and went looking for a replacement. The first place I checked, TigerDirect, had a host of N devices in the $35 range. I looked briefly at eBay, and found NIB devices for $20 instantly, and used devices starting at $0.01 (but they seemed to go out around $15; still…).

    I had also looked at eBay and TigerDirect for small (5- to 8-port) Gb switches, as I am looking at a bit of network reconfiguration, and needed to start with my first upstream switch from our new server. I had thought I would spend $100 or more, but immediately found a five-port for $18. They were even less on eBay.

    I might be able to get all the big switches in the school to Gb for around $100 (Cat 6 cabling is another issue altogether, but the short patches are a buck or so).

    A Close Brush With Scareware

    13 August 2012

    OK, I’ve got to say up front: I think that some virus and other malware writers should be stood up against a wall at noon and shot.

    I’ve told people for years that they needed to practice safe computing and use antivirus software and keep it updated. I recommend Microsoft Security Essentials for a couple reasons. First, it works pretty well. Second, it’s free, and kept up to date pretty well by Microsoft. Third, given the well-known vulnerabilities in Microsoft software, Microsoft should provide a free AV capability.

    I’ve spent literally hundreds of hours clearing up virus infections from various computers. Probably the worst one was my Mom’s computer. She visited gambling sites, free-coupons sites, free-stuff sites, and many other something-for-nothing sites, and her computer picked up so much digital smegma that it was wasting 80%+ of its CPU on the junk. After the third time cleaning off her machine (the second time by just reinstalling Windows), I ended up putting a version of Ubuntu on her computer, and then I put a skin on it that looked just like Windows XP. No problems after that. I’ve worked on 20+ machines from friends and relatives getting the e-crud out of them.

    Yesterday, Raegan called me in. She knows bullcrap when she sees it, and she had had a number of windows pop up proclaiming her computer had a virus. I knew from a look that she had a “scareware” app. The scareware is just another piece of malware; if you do the “scan” it requests, that’s permission for the crap to install itself on your computer, and then you have to send the SOB developers money to get it off.

    I powered her system down. Then, having worked on four other machines that were similarly infected, I pulled the drive out so I could sanitize it in offline mode.

    Of the other four machines that I’ve seen this on, I was able to completely clean two, one I cleaned but it was still severely affected (missing programs, missing files), and one was so fracked up I had to re-install the OS (I switched it from XP to Ubuntu).

    So I updated the Microsoft Security Essentials AV definitions, then the AVG, then Ad-Aware, and finally Spybot. Then I hooked her drive to an external USB-to-SATA interface I have, powered the drive up, and started scanning. MSE found and cleaned off the offending virus, which had infected the boot sector as well as tied itself into the machine startup in the registry. It took about five hours to fully scan and clean her 1.5TB drive (that had about 600GB of stuff). I repeated the process with AVG, which false-alerted on some stuff I know to be benign. I decided to not run A-A or Spybot.

    After putting the disk back into her machine, XP took a couple passes of CHKDSK, found some files damaged, and finally it all booted up, and looks pretty good. The only damage we have found so far is that the right half of her Start Menu stuff (My Computer, Control Panel, etc.) was missing. I found that that stuff can be restored by right-clicking Start, then Properties, then Customize, and finally Advanced.

    So after an hour or so of actual work, and about 10 hours of scanning, it seems that her machine will survive the scareware. Raegan is careful, and doesn’t hit any nasty sites. We tried to figure out where the crap came from, and we think it was a site that had a download for a word search puzzle generator, as it was one of only two sites she had visited immediately prior to the infection. It looks like the crap was put there before her MSE could recognize the signature of the downloaded malware.

    I have no problem opening a machine up and pulling the hard drive to scan offline (and I have the stuff to be able to do just about any drive, IDE, SATA, or SCSI), but most people don’t, and have to scan and try to fix the problem in situ, with the malware running also. That just makes it more difficult.

    But I would like to see the perpetrators have something nasty happen to them.

    A Small Open Source Win – and Windows Fail

    24 July 2012

    I needed to zip together a largish number of photos to upload to a server, from my Windows XP machine. Last night I started the process about 2100 by selecting the photos in the directory with Ctrl-A, then right-clicking “Send to compressed (zipped) folder”, and letting it run. I started doing other work and forgot about the compression. This morning at 0730, I noticed it was still running!

    I clicked Cancel to stop the process, and looked at the zip file so far. Or, tried to. It was corrupt and would not open. I figured that the Windows built-in zipper was having problems.

    I used to own WinZip (I had bought a license), but stopped downloading and using it when they went commercial a couple years ago. I used 7-Zip for the school machines and it worked well. So I downloaded and installed 7-Zip in about a minute flat.

    I selected the 322 files again, right-clicked and saw 7-Zip on the context menu, and told it to “Add to [filename].zip”. It started working. I watched it for a minute, and figured it would run for a while, as the 322 photos were about 6MB each. I told it to run in the background, and minimized it. It ended up running about three (3) minutes.

    So now I have a zip of the 322 files that is about 1.86GB in size, and it didn’t take all night to create. 7-Zip is far more robust than the built-in Windows zip tool, and it’s far less expensive than WinZip. Great tool.
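
    For what it’s worth, the same batch-zip job can also be scripted. Here is a minimal sketch using only Python’s standard library; the folder and file names are hypothetical. (Note that JPEGs are already compressed, which is why 322 photos at roughly 6MB each still zip to about 1.86GB.)

        import os
        import zipfile

        src_dir = "recital_photos"  # hypothetical folder of photos

        # DEFLATE is the standard zip compression; JPEGs barely shrink further.
        with zipfile.ZipFile("photos.zip", "w", compression=zipfile.ZIP_DEFLATED) as zf:
            for name in sorted(os.listdir(src_dir)):
                if name.lower().endswith(".jpg"):
                    zf.write(os.path.join(src_dir, name), arcname=name)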

    Electronics and the Wilderness

    28 June 2012

    I read an article today on Time.com. The article basically talks about an uptick in backcountry travelers who get into trouble due to a lack of basic equipment, instead relying on their electronics for everything from maps to flashlights.

    This article struck a chord with me. While hiking from the north rim of the Grand Canyon, I once rescued a woman whose sneakers had fallen apart about 2000 ft down in the canyon; she had one small bottle of water, no food, and serious blisters. I shared my food and water, and purified more water for her using the chemical purification kit I carried.

    I got stuck near Feather Falls in California a couple years ago, after dark, with no headlamp or flashlight. I had marked the trailhead on my GPS, and used the GPS to navigate back in the dark (the GPS pointed me towards both the trailhead, and the bread crumb track of the trail I had walked out, and I intercepted that trail in about 20 minutes). If I had been equipped with a flashlight, I wouldn’t have missed the trail to begin with.

    So I don’t always hike with a rope. But I usually have a map and compass, flashlight, water, GPS, something to start a fire, and a small medkit. I also usually carry five or so food bars, which is a couple meals. My cellphone also acts as a flashlight ( 🙂 ). But I also have enough miles in the outdoors that I am not terribly worried about getting lost.

    An interesting counterpoint. An issue of Backpacker magazine a couple months ago recommended that backcountry travelers ditch the maps and the external GPS, and use a tablet, which was touted as being able to hold not only the map of the area you were hiking, but every map of the planet, and reading matter for those nights in camp. One part of the rationale was that a paperback book weighed a bit more than most tablets, and the tablet could hold a lot more. Of course, for an overnight or couple of day trek, that might work out, and even then, one of my hike buddies has already used a solar recharger on the trail in Yosemite. I’d be a bit worried about keeping water out of a tablet, also.

    So I think that for anything more than a dayhike, you ought to carry a paper map. On my last backpacking trip a couple weeks ago, I had a GPS loaded with the topo map and track of the area I was hiking in Arkansas, but I also had a compass, as did Ian, and three paper maps.

    So this electronics geek, and backwoods geek, likes low-tech for the ultimate fallback.

    A Cool Photo Capability

    13 June 2012

    I bought a new camera a couple months ago. It’s a Sony Cybershot DSC-W690. Very light, but with a 16MP sensor, amazing low-light capability, video, and a 10x optical zoom. All for $170. Amazing. And very easy on the battery also.

    I took a lot of photos on a backpacking trip this past weekend. A number of the photos I took were “sideways”: I turned the camera 90 degrees because its field of view is wider than it is tall, and for some shots I wanted to show more up-and-down than left-and-right. This is an example.

    I dumped the photos to Picasa, which I have previously written about in admiring terms. It was a quick and easy upload as before.

    First, I used the camera to take a couple auto-panoramas, which is a cool feature in itself. I screwed up a couple of the panos and left the right side of them blank (they were black). I would usually have used the GIMP to crop out the black part of the photo, and then uploaded it again. But I realized that a crop function was available right on the Picasa site, and it was just as easy to use as the crop-to-selection function in GIMP. So that was cool.
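
    Incidentally, that kind of crop is easy to script, too. Here is a minimal sketch using the Pillow imaging library (an assumption on my part, as are the file names) that trims the black filler automatically:

        from PIL import Image

        pano = Image.open("pano.jpg")  # hypothetical botched panorama
        # getbbox() returns the bounding box of the non-zero (non-black)
        # pixels, which is exactly the crop needed here.
        bbox = pano.convert("RGB").getbbox()
        if bbox:
            pano.crop(bbox).save("pano_cropped.jpg")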

    But I knew that I had taken the rotated photos with the camera, and when I looked for them on the upload, there were none. Hmmm, I thought.

    I looked at the directory of photos on my hard drive. In thumbnail mode, I saw there were about six of them that were rotated as expected. I double-clicked one to preview it, and… it came up “properly” oriented, in other words, the same way I had rotated the camera to take the photo.

    This took me off to look at the EXIF metadata stored with each photo, and sure enough, there is a flag in there that shows the camera rotation. So far, Paint and Windows Picture and Fax Viewer don’t rotate the photo. GIMP 2.6 notes the rotation and asks me if I want to auto-rotate it. Picasa (both local and online) auto-rotate it without asking.
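
    If you want to honor that flag yourself, it is not hard. Here is a minimal sketch using the Pillow library (an assumption; any EXIF-aware library would do, and newer Pillow versions have ImageOps.exif_transpose() that does all of this in one call):

        from PIL import Image

        ORIENTATION = 274  # the standard EXIF tag number for Orientation

        img = Image.open("photo.jpg")  # hypothetical file name
        flag = (img._getexif() or {}).get(ORIENTATION, 1)
        # 1 = normal, 3 = upside down, 6 = rotated 90 CW, 8 = rotated 90 CCW
        if flag == 3:
            img = img.rotate(180, expand=True)
        elif flag == 6:
            img = img.rotate(-90, expand=True)
        elif flag == 8:
            img = img.rotate(90, expand=True)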

    So this is pretty cool. My camera apparently has a tilt sensor or accelerometer in it. I like it, it’s a pretty amazing camera.

    An Annoying Windows Bug

    4 June 2012

    I got very annoyed when XP Pro introduced a “feature” that automatically shuts off wifi whenever a wired connection is active. There is no good reason for this; I used the ability to share connections.

    Another thing is vexing me now. My machine was docked, and I did a suspend. Now Windows refuses to re-enable the wifi. It requires an admin credential to re-enable. This persists through a complete restart.

    In the past, I have had to boot off other media (like System Rescue CD), re-enable the wifi, and then restart Windows.

    So thanks, Microsoft, for doing everything possible to limit connectivity.

    T-Mobile and Tethering

    3 June 2012

    In the past, I’ve tethered my laptop to my Blackberry with a fair amount of success. In particular, I stayed 14 weeks in an Embassy Suites in Dallas (Park Central) last year, and the hotel wifi was so bad, I asked for a room on the north side so I could poach from the Holiday Inn Express on that side. When that didn’t work, I would tether my computer to the phone, and I could get a megabit per second consistently (it was solid 3G).

    I haven’t done that for a while, until yesterday. I couldn’t get good wifi where I was, so I tethered. This time, T-Mobile put up an intercept page, and offered to allow me to pay for the privilege.

    This sucks. I am paying for a data plan, and it’s unlimited. Whether I am looking at it on my screen, putting it on a SIM and moving it to another machine, or moving it over a USB cable, it’s my data. It’s crappy of T-Mobile to start wanting to soak me for using a built-in capability of my laptop and Blackberry to share my data.

    It’s another reason to drop T-Mobile as soon as I can.

    Easy Printing

    1 June 2012

    I needed to print a document this morning. I was sitting in a car in a parking lot in downtown Edmond.

    Fortunately, downtown Edmond has wifi. Raegan emailed me the file that needed printing. I downloaded it, played with the column settings a bit, and exported it as a PDF to a flash drive.

    There was a FedEx Kinkos about three blocks away, so I walked over there.

    A very slick setup at the self-serve printer/copier. It sucks your credit card in and prompts you to plug in your flash drive; you select a file using the touchscreen, it previews, and it prints.

    I was in and out in three minutes with the four pages I needed. Total cost: $0.44.

    There was an option to print from a smartphone also (and I had the file on my Blackberry).

    So this was convenient and inexpensive, a good experience.

    Adventures in Video Editing

    29 May 2012

    A couple weeks ago, the kids were in their Spring piano recital. At the Fall recital, I volunteered to set up a camera so that people in the back of the large hall could see the kids playing via the hall’s overhead projector. I also decided I would record the recital digitally.

    I have a PCI-based video capture device, but I needed one a bit more portable (USB interface) so I could use my laptop. I bought one from Best Buy, but it would not work with my laptop. They didn’t have any others, so I just ended up recording with an 8mm camera, and taking the video out of it as analog.

    I brought the 8mm home, hooked it up to my Pinnacle PCI card, and started capturing. The captured video was way off color-wise. I tried some other video; it was off too. I reset the card, cleaned it, tried the input color adjustments, no luck. The card was screwed up hardware-wise.

    I borrowed a Pinnacle USB TV adapter from my buddy Ron. This led me on quite the chase. My Pinnacle Studio 10 died like a scurvy dog when trying to access the USB TV device (they are the same company, why can’t they work together?).

    I downloaded the Pinnacle TVCenter app. It would recognize the USB adapter, and would get over-the-air TV if an antenna was connected, but it wouldn’t recognize the baseband video input. I sent a tech support query to Pinnacle. They wrote back a week later that the product was end of life, and they therefore would not support it.

    Note to Pinnacle: that’s the last hardware or software of yours I ever buy, or even try.

    I tried a couple other pieces of software, under both Linux and Windows. I had all kinds of problems getting the sound recognized. Sometimes I would get video but not sound, other times neither. It was all very frustrating.

    One of the programs I tried was Windows Media Encoder. It would show video, but not audio. I looked for an update (there isn’t one), but I saw a reference to Windows Movie Maker (WMM). It was already on the machine, so I tried it out. Got BOTH video and audio captured!

    So I started a capture of the full 8mm tape, and about an hour later I had about a GB of video, in Windows Media Format (WMF). I wanted to split the full recital into clips for each of the kids that were playing. I thought I should be able to define “scenes”, where a scene was one kid, and then save each as an individual clip. So I brought the big file up in WMM and played with it a while, and while I could define a clip, WMM wouldn’t save the individual clips as files. So I selected the first kid as a clip, and then deleted everything after that. This took 10 minutes. I had 37 kids to do, so that was going to take a lot of time. I tried the same process under an open source tool called Drop Shot, and it was faster, but still took about 7 minutes per clip.

    I took a different route. I ran the tape back, fired up WMM, and for each kid, started a new capture file, started the 8mm in playback, started capturing, stopped capturing after the kid was finished playing, and then stopped the camera. I saved each clip, then went back and did it over again. This worked well, and wasn’t terribly hard to do, but it was tedious. WMM needs to remember at least one setting: I wanted to capture in 720×480 mode, but I had to reset this for each clip, which meant five extra clicks for each of the 37 kids. Also, WMM only saves in WMF; not surprising given that it’s a Windows product, and Windows doesn’t like to interoperate.

    So I wanted to convert the WMF files to something more generic, like MPEGs. I used the open source VLC player to do this. It wouldn’t encode to MPEG-4 for some reason; not only would it not convert, VLC died completely. It did work for MPEG-2 files. I tried to figure out how to do the conversion from the command line, but looking over the docs and the man page, it would have taken a couple hours to figure out, so I just did it 37 times using the GUI (something like the sketch below is what I was after). VLC should remember the last directory/folder name, and should remember the source file name for re-use.
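
    For the record, here is roughly what that batch conversion might look like scripted. This is a hedged sketch, not tested against that exact VLC version; the codec and bitrate parameters and the file extension are assumptions on my part. It uses VLC’s classic --sout transcode chain to write MPEG-2, the format that did work:

        import glob
        import subprocess

        # Convert every capture in the folder to MPEG-2, one VLC run per file.
        for src in glob.glob("*.wmv"):  # hypothetical extension of the captures
            dst = src.rsplit(".", 1)[0] + ".mpg"
            subprocess.run([
                "vlc", "-I", "dummy", src,
                "--sout=#transcode{vcodec=mp2v,vb=2000,acodec=mpga,ab=192}:"
                "standard{access=file,mux=ps,dst=" + dst + "}",
                "vlc://quit",
            ], check=True)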

    Each conversion reduced the size of each clip by about 50%, with some loss (fully acceptable) of quality in the video; I couldn’t tell any difference in sound.

    After this, I had 74 clips: one high quality and one medium quality for each kid.

    I ginned up a quick web page to allow people to easily download the clips, and fired up the open source FileZilla program to load the files to the St. John’s server. It was a flawless transfer, in spite of it being almost 2 GB of data.

    So I need to learn a bit more about video editing. I plan on capturing the rest of my 8mm camcorder tapes to disk over the next week, and then I will edit them to suit.

    Next time, No Blackberry, *OR* T-Mobile

    30 April 2012

    This photo illustrates why my next phone will not be a Blackberry, and my next carrier will not be T-Mobile:

    The phone on the left is my son’s Samsung Gravity, and the phone on the right is my Blackberry 9780.

    That is my second Blackberry. The main problem that I have with the Blackberry is that the cell reception, in a word, is deficient. You could say that the reception sucks.

    So the situation in the photo is that we were at a restaurant in Jones, OK. Ian’s phone had voice and data service, and my phone had squat.

    Ian actually had access to three networks. My phone could see AT&T, but would not connect to it.

    If I’m in downtown OKC, of course, I have no issue at all. But all over while traveling, the kids’ phones have service, while Raegan and I (we both have the same Blackberrys) don’t.

    I’ve talked to T-Mobile about this repeatedly. In every email, every phone call, I describe the problem, and in just about every case, it triggers the script: “I understand you are having trouble making phone calls…”, and they want to tell me how to turn the damn thing on. I’ve posted to support forums, to the Crackberry forums, and to several others. There are no solutions. I’ve tried a new SIM and many settings; T-Mo has looked at the phone configuration, reprovisioned it, etc. It’s not just my phone, since Raegan has the same issues. I also had my phone replaced last summer.

    In a lot of cases, one of the other phones would have service (say, 3G), while the Blackberry has one less level (in this case, EDGE).

    Why is this? It could be that the basic Blackberry has a problem with its radio, but I don’t think that is the problem. I have been with other people who have Blackberrys, with other service providers, but they get service just fine. I think the problem is how the Blackberry is connected to the T-Mobile network. It won’t roam very often; like I said before, a lot of times I can see one or more other service providers, but the phone won’t connect to them (it says it does, but no voice or data).

    We’ve been T-Mobile customers since 2005. We had been AT&T customers since we got our first cell phones back in 2002, and when AT&T merged with Cingular, the customers in two markets (OKC and Tulsa) were given to another company (Alltel?) under some term of the merger. That company didn’t do international texting (which Raegan does a lot of), and so we went with T-Mobile.

    T-Mobile has really not been of any help in the four years I’ve had Blackberrys, especially in the past couple years, with the 9780. I like a lot of what the phone does. It’s got good battery life, runs a couple apps that I use regularly, and it’s been fairly reliable. But there are times the phone’s connectivity has driven me wild, like when our house was threatened by a wildfire last summer, and it was having trouble connecting from San Diego, of all places. It’s simply unacceptable.

    I’ve had it.

    Reuse of SATA Drives in New Containers

    21 April 2012

    A buddy of mine donated a couple external drives to St. John’s a couple years ago. They are USB devices, and I’ve been using them as portable backup devices. A couple months ago, both stopped working. As is my usual practice when in a hurry, in each case I found some other way to get the data moved.

    These were Western Digital model 3107 (120GB) and 3207 (160GB) devices.

    I eventually brought the drives home, and decided to take a look at them and see if I could get them working. I was surprised when I opened them up – they were SATA drives! And they were *both* remanufactured. An outfit called Ontrack Data Recovery Services had opened them up and then rebuilt them. My buddy said they were bought new by his dad, who never bought anything that wasn’t first class. So I doubt they were marked as reconditioned or rebuilt or anything like that.

    So I am kind of surprised by this. The reconditioned drives are both Western Digital also; usually a good brand, but like all mechanical devices they are subject to failure. The USB-to-SATA interfaces from each enclosure work fine and are potentially very useful, so they go into the tool kit I carry around (I’ve done a couple SATA-to-SATA clones, and they are bitching fast compared to IDE).

    And if I ever buy an external backup drive, I’m going to open the darn thing up right there in the store and check it out.

    OKC Now Has TSA Strip-Search Machines

    11 April 2012

    So TSA has installed the latest step in their campaign to completely strip-search airline passengers, a see-through scanner, at the OKC airport.

    Out of the last six times I have been forced through these ought-to-be-illegal machines, five times I have been subjected to groping as well.

    So the machines clearly don’t work.

    The TSA people that used our tax money to inflict these dehumanizing machines on us ought to be ashamed, and fired.

    The Congress, which should be protecting us from this government “security” apparatus, is cowardly.

    It’s disgusting.

    TSA at SAN – Clueless

    31 March 2012

    I didn’t have the best experience going through security at the SAN commuter terminal today. A little background: SAN has a curfew for takeoffs that ends at 0630. My flight was scheduled to depart at 0620; there were five flights scheduled to depart from the commuter terminal right at 0630.

    Each of the five flights was a regional jet, about 50 passengers on average. That’s up to 250 people going through security in the 1.5 hours from when the terminal opens until the flights close about 30 minutes before departure.

    Now, the TSA has two lanes there. One was open; there were over 100 people in line when I checked at 0500. I note for the record that TSA had the normal number of people staffing that lane (six), and another eight standing around watching.

    TSA was using their horrible backscatter “we MUST see them naked” machines. Now, it takes extra time for people to take off all belts, pens, wallets, and chapsticks. Then it takes 10 seconds or so for each member of the traveling public to get loaded into the million-dollar monstrosity that has been forced on us (using our own tax money, I might add). The stupid machine takes 10 seconds to irradiate you, and then at least another 10 seconds for the hidden TSA person to finish staring at your naked body. So that’s at least 30 seconds per person to (1) strip you of your clothes and your rights, and (2) get a single person through the checkpoint (and that doesn’t count secondary screening when the damn machine doesn’t work – more on that later). At 30 seconds each, getting a couple hundred people through the checkpoint using the backscatter machine would take more than 1.5 hours, which is impossible since they have roughly an hour to do it.

    So somebody came to their senses (finally), and they opened up the standard magnetometer, and started cranking people through in less than half the time. At that point the line really started moving.

    So I was next in line for the magnetometer, and some TSA guy who is working the backscatter literally interposes himself between me and the magnetometer and pulls me over for the backscatter. Now I had to take off my belt. And take my chapstick out. I managed not to take my wallet out. THAT got me secondary screened.

    Side note to TSA: Your policies are stupid. The NO METAL policy is dumb – you treat a quarter in your pocket like a 9mm pistol. And your magic million-dollar machine is not bright enough to figure out that a Chapstick isn’t a threat (I found that out in Boston last week). Or a wallet. But really, TSA, how many square leather guns do you run across?

    So now that the Invisible Commissar Of The Backscatter Machine has Detected An Unauthorized Wallet, you have to have it Inspected by yet another TSA guy. And that also means that they swab your hands, because TSA apparently Knows For Sure that if you have a wallet that isn’t taken out during the scan, then you, a loyal American with a security clearance since 1984, have probably handled explosives and You Might Be A Threat.

    TSA is, as an agency, stupid. The security situation is NO different than it was pre-9/11. Except, of course, it is hideously more expensive, due to having twice as many TSA people as they had before, and they have spent huge amounts of money on machines that detect that you have a Chapstick in your pocket. And let’s not forget the tens of millions expended on machines that puffed air at you, AFTER you went through the magnetometer, in an attempt to find out if you have had explosives on you recently.

    I’ve read many releases from TSA “Administrator” John Pistole, and he turns a deaf ear to the taxpaying traveling public. They need to rethink, or actually think through, their policies, abandon the strip-us-naked machines, and stick with what works: the magnetometers.

    Memo To Smart TV Manufacturers

    11 March 2012

    We got a new LG 47″ LED LCD smart TV over the holidays. I think it was a great value.

    One cool thing is that the TV interfaces with an 802.11n wifi USB dongle. The dongle connects through our house router, and from there out to the internet. The TV can interface to YouTube, Hulu, and the like to play those videos directly on the TV (it uses a pretty cool remote that has accelerometers in it, like a Wii, to act as a mouse).

    And that is the basis of my memo. If the TV wifi adapter can be used to get content off of the Internet, why not add a video streaming encoder to let other TVs in the house, and other laptops, act as remote displays for the TV?

    I’m thinking this scenario: I was watching the NBC evening national news earlier, when I went from the den to the kitchen to help cook dinner. It would have been very nice to take my laptop in there, with the big TV streaming to the laptop.

    I could do this myself. The TV has a monitor output. I can route that video to a computer with an HD capture card, and then encode and stream that out over the house network. But it would be easier to just get it direct from the big TV.

    How about it, TV engineer geeks?

    BTW, in an exercise of pointless tech, I airsnorted the YouTube requests from the TV back when we got it, and programmed my wifi router to redirect those requests to my laptop instead of the Internet. I had a video encoder running to stream a file on the hard drive, and the TV dutifully played it. It wouldn’t be difficult to do the same thing with a DVD. It doesn’t surprise me that this worked. Most of the digital TVs seem to be running a version of Linux with MythTV or something very similar.
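
    The laptop side of a stunt like that can be as simple as pointing a streamer at a file. As a rough sketch (not what I actually ran; the file name, port, and the exact URLs the TV fetches are all assumptions), VLC can serve a local file over HTTP:

        import subprocess

        # Stream movie.mpg over HTTP on port 8080; the router redirect points
        # the TV's video requests at this machine instead of the Internet.
        subprocess.run([
            "vlc", "-I", "dummy", "movie.mpg",
            "--sout=#standard{access=http,mux=ts,dst=:8080/}",
            "vlc://quit",
        ])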

    Big Oil Propaganda, Doubling Down

    2 February 2012

    I posted a while back about a series of Big Oil ads being used to influence public opinion. I’ve noticed a couple new ones.

    In one, the spokesperson includes a phrase about “Canadian oil sands” in the ad. They also claim that this will produce “one million” American jobs. This is highly doubtful. A study cited on Wikipedia claims up to 5,000 temporary jobs laying pipe and such. The Canadian company that is pushing the pipe claims 20,000 jobs. Congress (Republicans) claims hundreds of thousands. It should be noted that the first Keystone pipeline project used contract people from Canada to do a lot of the work.

    Another ad that I noted last summer involved three college students arguing in front of a professor, who doesn’t say much. The kids end up agreeing, sort of, that natural gas is cleaner and safer and just overall wonderful. I’ve seen two more ads in the same series. One shows ordinary people having the same discussion, and yet another shows some business types having the discussion.

    I wonder why they bother. Big Oil already has the DC politicians in their pockets via lobbying. I wonder if the Occupy movement has Big Oil bothered? I hope so.

    PIPA and SOPA

    19 January 2012

    I’m glad that the Protect Intellectual Property Act (PIPA) has been withdrawn. I hope SOPA meets the same fate, and soon.

    I support protection of IP. I produce IP in this blog. I don’t mind if my views are read and adopted by others, or adapted into other views, but I wouldn’t like it if someone took my views and put their own names on those views.

    But PIPA was the functional equivalent of carpet bombing. It’s way easy to grab stuff off the Internet and use it. Rachel Maddow had four examples of people who support SOPA/PIPA, all US Senators or Representatives, who had images from the web being used for their Twitter feeds or websites. None had permission given, or attribution made. If the staff of a Senator can’t get it right, how would the rest of the country?

    Making sites, like Google, or Facebook, or the like police for IP violations is not reasonable. It’s like expecting AT&T to listen to the content of every phone call, and make a judgement as to whether the content involves illegal activity.

    Better to have targeted investigations of the worst offenders. I think that the issue isn’t things like photos anyway, it’s people selling or sharing music and videos. Let the content owners or their representatives (MPAA, for example) do the legwork, and then get the police involved. Just make sure that due process is followed, and that the punishments fit the offense.

    I try to not be paranoid, but I wonder sometimes if the people who write these laws have an ulterior motive. The “Internet kill switch” I think has appeal to Those In Power. Legislation like SOPA and PIPA might be the slippery slope that gets us headed that direction.

    The Internet has been very liberating in a lot of ways. It is designed to make sharing easy. If sharing is easy, then things will be shared. A government should not be in the business of trying to stop the sharing of legal stuff, whether it be data or ideas.

    An Odd Network Observation

    13 December 2011

    I’m doing some work in my hotel room. The room has a hardwired connection. I put the hotel network cable into my plain-vanilla Netgear switch, then ran two more cables to the two computers I am working on. I am using the network to transfer files between the machines. The connection is DHCP.

    So the odd thing is, one machine has the IP address 192.168.6.209. Not unusual, a Class C non-routable address. But the other computer has the IP address of 50.94.39.196. Not a Class C address at all, more like a public IP. WTH?

    So the two machines could not talk to each other. I solved the problem by changing the 50.x machine to a static IP of 192.168.6.210, and now the machines can talk just fine.
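
    A quick sanity check shows why they couldn’t talk. This is a minimal sketch using Python’s standard ipaddress module; the /24 netmask is an assumption based on the addresses involved:

        import ipaddress

        lan = ipaddress.ip_network("192.168.6.0/24")  # assumed netmask
        for addr in ("192.168.6.209", "50.94.39.196"):
            # Prints True for the first address and False for the second;
            # with no route between subnets, the machines can't see each other.
            print(addr, "on the LAN subnet:", ipaddress.ip_address(addr) in lan)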

    But it is still odd how the two IP addresses were assigned. Both have the same DNS suffix, and each machine’s assigned gateway is in the same subnet as its own IP address.

    This is the second time I have seen this behavior; last night was the first. I can’t understand how the same router or smart switch assigned IP addresses in two completely different subnets. At least I got the transfers working OK.

    Picasa – Pretty Cool

    18 September 2011

    After the Yosemite backpacking trip, I wanted a way to share all of the photos I took on the trip. I also wanted to be able to have the other guys on the trip be able to upload their photos if they wanted. I was vaguely aware of Picasa, and so I checked it out.

    Turns out Picasa is both a photo sharing site, and an associated app to perform photo manipulation. I downloaded the app, and it automagically spent some time finding and indexing image files on my computer, including the batch of Yosemite photos. I haven’t played with the Picasa app yet. I usually use Paint (either the Windows or Linux versions) or The GIMP when I have to manipulate images.

    I uploaded the Yosemite photos. I created a Picasa web account (and since I already had a Google account, that was pretty straightforward), pointed it at the directory where the pictures were, added a title and some other info, and then the photos uploaded. It was fast and easy.

    Once the pictures were there, it was trivial to enable sharing. I added email addresses for the other five guys, added some geolocation data to show where Yosemite is on the map, and then looked at the presentation. It was pretty simple, medium-sized previews, which could be clicked to bring up larger, or even full-resolution images.

    One thing I had been dreading was captioning, since I had 200+ photos. I have looked at packages that required a lot of keystrokes to caption a picture. Usually the sequence is click the photo, then click a button to caption, type in the caption, then click save or whatever, then go back and repeat.

    Not so. I clicked Actions, then Captions, and got 50 pictures arranged with caption space next to them. The process is such that when adding or changing a caption in a field, moving off the field saves the caption automagically (via Javascript, I would imagine). Since the photos were arranged by time, I got into a rhythm of typing in a general caption for a major section (for example, “Day 2, Hiking.”), and pasting it into picture after picture. So the process was click the mouse in the next field, Ctrl-V, repeat, unless I wanted to add some additional text like “Boy, were we dirty!”. So captioning everything took about 20 minutes.

    Another thing that was pretty cool. If there are people in the picture, Picasa does a decent job of identifying faces, and prompts you to name the people when the mouse crosses the face. It’s optional to actually name.

    I noticed that one of the other guys uploaded photos into the album at some point. One thing that I would gripe about: when the album is updated, the people that you have authorized to upload to the album all get notified, but the owner of the album apparently does not by default. So I will look and see if there is some option I need to enable for that. Note several hours later: It turns out that I got an email from Picasa, while I was writing this post, that let me know that the photos had been uploaded. So, gripe > /dev/null.

    Overall, Picasa on the web is a pretty cool site. I posted a link to the full site from the blog post I wrote for the Yosemite trip, and I think I can recommend Picasa when you have pictures that you want the world to see.

    Facebook and Linked Accounts

    18 September 2011

    I have had a couple opportunities to “link accounts” in the past couple days. One was with Urban Spoon; I don’t remember the other one.

    So in both cases, the Facebook cookie that is on my hard drive was accessed and read to determine who I was. Then FB was invoked and I was asked to confirm that the site could do a couple things: (1) access my basic information, which includes birthday and such; (2) post things to my Wall; and (3) send me email.

    I declined both. Items 1 and 3 I had no problem with. It was the posting to the Wall I didn’t want. I want the things I post to be from or about me. I have no problem with some things being posted to my Wall; for example, I was tagged in several photos from my Yosemite backpacking trip by other hikers, and that tagging was posted to my Wall since it was about me.

    But what I suspect would be posted by other sites is mainly advertising. I’m not opposed to getting advertising, but I don’t particularly want advertising posted to my FB Wall in my name.

    So to Urban Spoon, or any other sites that want to practice cross-site authentication, I would suggest that you ask for the minimum information first, and then ask the user if they want to allow you to post to their Wall. In the case of Urban Spoon, I’m already giving them free content by linking my blog posts to their site, and not asking anything in return except to access other information on the site. I don’t want them to speak for me by posting to my Facebook Wall.

    Urban Spoon, Me, and Eating

    17 September 2011

    As I say in the About Bill Hensley section of this log, “Food, is good”. I love to eat. One of my occasional bits of wit is that food is better than sex, because you have to eat. It might as well be good (the food, that is).

    A while back, I wrote a blog post for Boulevard Cafe here in OKC. Someone from Urban Spoon noticed the post, and asked me to link it back to the US website. I checked it out, and then added the link, and have noticed traffic from US to that post every once in a while.

    When I was out in Florida this past week, I saw a US logo on a restaurant I walked by. Today, I was sitting around, and decided to check the US site out. It about knocked my socks off, the content was amazing. I decided that from now on, I would link my restaurant reviews to the US restaurant listings.

    One of the things I just noticed on the Urban Spoon site was a map of Oklahoma City Restaurants At Night. This is a super cool map! I really like the feature where if you float the mouse over a city/region, the lights on the map change color to yellow and get a bit bigger. Very, very cool.

    I think that adding my small bit of content to the US site is really what the web is all about – I have some experience, and I add that bit of experience to the electronic consciousness of the rest of the planet, without giving anything up.

    Happy eating!

    Vista and Administrative Stupidity

    10 September 2011

    So here is something dumb. I created a subdirectory (or “folder”) called 5-9 Sep 11 Dallas, to store some expense report documents. I needed to upload them for my expense report.

    As I worked on the report, I used the web-based upload function to get the files to the server, and realized that the actual folder name should have been 6-9 Sep 11 Dallas. Since most versions of Windows allow you to change a file/folder/subdirectory name from a file dialog box, I right-clicked the folder, then clicked Rename. It started the name change process, I changed the 5 to a 6, and hit Enter.

    Vista thought about it for a second, then put up the dialog for requesting and authenticating elevated privilege. WTH, I thought. I tried just clicking OK, that didn’t work. I did it again, and this time selected my CAC authentication, and Vista informed me that wasn’t elevated enough.

    I left the folder name the way it was, then uploaded the files. I then immediately fired up Windows Explorer, navigated to the Documents folder, then performed the folder rename with no problems. So there is a mismatch in the security settings for the machine, or maybe a bug in Vista. Whichever it is, it’s annoying.

    Experimenting with An Archos 7 Home Tablet

    31 July 2011

    I’ve been wanting to play with a tablet for a while. I say “play with” quite deliberately, as I can’t think of a real mission for one – yet.

    A friend gave me an Archos 7 Home Tablet about a month ago. I’ve been carrying it around for a while, using it various places, and putting it through its paces.

    This afternoon, Ian and I went to Best Buy and Office Depot, where we played with some other tablets. The last time I went to these places, there was only one – the iPad. Now there are about five at Office Depot, and eight or so at Best Buy (and this didn’t include the readers like the Kindle and its equivalents).

    The thing I was struck by – there was only one 7″ tablet, and it was a reader. The rest of them were 10″ or better. These all ran Android 2+ (I think 2.2 or better); they were lighter, had more memory, were faster, had brighter, higher-contrast displays with more brilliant color, more connections, and more apps than the Archos 7. Amazing hardware. Several were in the $300 class; the Archos retails for $200.

    So as I have used the Archos tablet, I have concluded that it is good for maybe two things for me. First, if I am at a restaurant, or in bed or on the couch in my hotel room, the Archos is OK for surfing the web for news, and for updating and reading Facebook. It might be a good reader, but I didn’t try that. It seems to be good for playing digital music, but it would not run any streaming audio or video (think YouTube, or my favorite online station WDUV). I SSHed into my server at St. John’s a couple times (it was quite a pain to get the SSH client on the device) to check status and fix one issue, and that was OK (I can do that on my Blackberry also).

    This blog post is being written on my HP laptop – that should tell you something right there.

    First of all, the on-screen keyboard sucks. It is not responsive, and not accurate. I had a multi-line status update to Facebook an hour ago, and a slightly errant keypress wiped the tediously typed-in entry out. I say tedious, because as I typed on the on-screen keyboard (OSK), it missed a large number of keys, even though it indicated each key was taken by showing it as a small floater above the OSK. It seems like a lot of the missed keys were right after spaces. The screen on the Archos is a resistive type; I used a number of other tablets today, and they have the same OSK layout, but their touchscreen (a capacitive type, I see from researching online) response was significantly better; they missed no keys, and kept up with fast typing with little backspacing. Much better.

    I had a lot of trouble dragging (scrolling) the display. I finally figured out that instead of using my fingertip, if I flipped my finger over and dragged/scrolled with the tip of my nail, it was fairly accurate.

    One thing the OSK needs is left and right arrows. When I missed a character while typing, trying to put the cursor back on the missing letter led to a lot of cursing. There is a backspace key, but it’s destructive. An arrow key would help a lot, you could go near the error, then click over to it quickly.

    There is no Flash support, which is probably why it will not play video or audio streams. Supposedly the 2+ version of Android does support Flash.

    When I first brought it home, I plugged the charger into the wall, and plugged the other end into the Archos. A couple hours later, it still would not boot. I realized after a couple minutes that the holes for the audio jack and the charger are the same size, and are right next to each other. I had plugged the power into the headphone jack. Probably lucky that I didn’t fry it.

    I’ve taken it to companies, restaurants, rest stops, and home, and it does a great job of getting on wifi, even in WPA mode. It handles my webmail just fine. While I was in Dallas, Raegan sent me some photos of some award pins I needed to pick up for her at the local Girl Scout office, and being able to show the staff the picture of the pins on the 7″ screen was really nice.

    The wallpaper tool is stupid. I don’t know if it’s Archos-specific, or part of Android, but it should be able to take a picture that you want to use as a wallpaper, and size it to fit the screen. Even Windows can do that, and Linux has been able to forever.
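
    The missing feature is a few lines of code. Here is a minimal sketch using the Pillow library (an assumption, as are the file names, and I’m going from memory that the Archos 7 screen is 800×480):

        from PIL import Image, ImageOps

        # Crop and resize an arbitrary photo to exactly fill an 800x480 screen.
        wallpaper = ImageOps.fit(Image.open("photo.jpg"), (800, 480))
        wallpaper.save("wallpaper.png")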

    I tried to download some apps, and it was a PITA. I don’t know if that is an Archos problem, or Google’s (Google, which developed Android, also runs the Android app store). The first problem is that the appslib program complained it was out of date. To get a new appslib program, you had to get on the AppsLib site, which required you to create an account, and then wait 24 hours (WTH?). I waited (I tried to get on there every couple hours, to no avail; it takes MORE than 24 hours), then downloaded the appslib program, which would not install. After some searching around online, I found the tidbit that you have to uninstall the existing appslib FIRST. I did that, the new one installed immediately, and it then showed me what I really had on the Archos.

    I tried to download a couple other apps, but between the lame AppsLib interface, and the incessant complaining that I don’t have Android 2+, and other errors about file permissions and such, I just gave up. There is a program loading mode that uses a laptop or desktop to download the app, then load it into the Archos via USB, but I only did one app that way, and it’s even kind of a pain. It should just work.

    I would like to use the Archos as a car computer, with a moving map. I still haven’t been able to find apps in the AppsLib, like a mapping program, that will load on Android 1.5. I haven’t been able to make it recognize an attached USB GPS (and it does not have built-in GPS). It will not tether to my Blackberry for data (another Android 1 limitation), and it does not have Bluetooth.

    It did a good job playing music from my Blackberry SD card. I pulled the card, popped it in the Archos, it indexed stuff for about 30 seconds, and then I was playing music immediately, over either my headphones or the built-in speakers. Pretty nice.

    The Archos is an amazing technology, though. I was able to use my company timekeeping website with little problem (that thing is a web disaster, so that was pretty amazing), and things like Facebook, with a lot of stuff going on in the background, worked well.

    I will have a tablet eventually, but right now I don’t think that one will help out too much. I think they can be a lightweight bridge between a Blackberry and a laptop, but that is a niche that I don’t really need filled right now. I plan on keeping an eye on the market, and if the prices drop a bit, I might buy one of the 10″ devices.

    I also have tried to find out how (if?) the Archos can be upgraded to Android 2. We’ll see how that goes.

    Word 2007 Copy and Paste Weirdness

    20 July 2011

    I have to use MS Word as part of my job. I am currently building a substantial document from three other documents. One of the major parts of this is to grab text and JPEG images from one of the documents (which is another document created with Word 2007) and paste it into the right place on the document I am working on. I then have to edit the document.

    So there is a lot of extra work here. I cannot make Word keep the formatting right. Both documents are in Arial; the source is 11 pt and the destination is 12 pt. Given that, Word insists on changing the font to Times New Roman when the paste is done. Yes, I know about the option to keep source or destination formatting; Word, it seems, doesn’t. Word will also randomly place the copied text in front of or under the numbering already started in the destination.

    All of this can be overcome with more editing. But here is a weird one. Check out this document (it’s a screen capture):

    So notice that the right half of the page is blank; the Cloak of Word Invisibility goes right over two pictures that are embedded in the document. Word knows they are there (from the image outline), but it won’t tell me at all what is covering up half the page. I can’t delete it. This happened because I pasted a single sentence from one document into this one. The original document doesn’t have a huge blocker there.

    So Word, after being around since about 1993, is still very buggy where basic functionality is concerned. I wish it were not. I opened this document in OpenOffice 3.1, and it does not have the big block. I closed the document and reopened it in Word again; the block is still there. What a pain.

    01 Aug 2011 update: check the second comment on this post for a fix.

    A Slightly Spooky Facebook Feature: “Favorite Places”

    24 June 2011

    OK, so I am on Facebook too much, probably. I think it’s a pretty cool application. Sometimes I marvel at the ability of the app to correlate stuff, like the interests of people.

    But a couple days ago, I noticed something that kind of freaked me out and fascinated me at the same time. On a page that displayed the status of one of my friends, and the comments to that status, I saw this on the top right part of the page:

    Those are two restaurants. One, Lido, is a Chinese-American fusion restaurant in Oklahoma City. The other, Hunter Steakhouse, is in San Diego. I thought it amusing that I had been to *both* restaurants. After a bit, I came back to another status page, and there were two more restaurants (in Omaha, and OKC), both of which I had been to. Later, another two restaurants (Alexandria, VA, and OKC); been to both. Hmmm, I thought, I’m seeing a trend! I started clicking the “Next” link. There were probably 15 pairs of restaurants.

    So the amazing thing here is that every one of the restaurants is in my blog.

    It’s not surprising that Facebook knows about my blog; it’s in my profile as my website. The really amazing thing is that somehow, Facebook went to my blog, found the restaurants, and identified them as restaurants, even though there are no tags that consistently define them as restaurants. A typical blog entry title is “Railhead Diner, Purcell, OK”. You would have to have some serious correlation software and a great database to see that my mention of Railhead Diner in Purcell is in fact a restaurant. The word “Diner” in the title would help, but what about an entry like “Lo Sole Mio, Omaha, NE”?
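
    To be fair, the titles are at least machine-parseable. As a hedged guess at the first step (the titles below are from this blog; everything else is speculation), a simple pattern pulls out name, city, and state, and it would then take a big places database to decide which names are restaurants:

        import re

        # "Name, City, ST" - the pattern the restaurant post titles follow
        TITLE = re.compile(r"^(?P<name>[^,]+), (?P<city>[^,]+), (?P<state>[A-Z]{2})$")

        for title in ("Railhead Diner, Purcell, OK", "Lo Sole Mio, Omaha, NE"):
            m = TITLE.match(title)
            if m:
                print(m.group("name"), "|", m.group("city"), "|", m.group("state"))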

    So this is, to me, a slightly spooky Facebook feature. I don’t find it an invasion of privacy or anything like that; after all, the blog is 100% in the public domain. It does make me think about the vast resources Facebook *has* to have to be performing correlation like this; it’s very intelligent.

    There is a part of me that would like to work at Facebook or Google and work on some of these kinds of amazing software applications.

    I Learned Something New About Screen Captures

    19 June 2011

    I learned something new about Windows XP just now. I do a lot of screen captures (what used to be known as a screen print) to get map data to a place where I can edit it with notations, or to crop it. Most of the time, I do the capture with Ctrl-Print Scrn (on most keyboards).

    Right now I am running with my laptop as the primary display, and a flat panel as my secondary display. I’m looking at stuff on the primary, and annotating a captured map on the other.

    I accidentally did a Shift-Print Scrn, and when I did the paste, I got the capture from the secondary display. That’s pretty cool. I verified the behavior a couple times.

    So add that one to the list of keyboard shortcuts. I know it’s probably documented somewhere, but it’s cool to find it out inadvertently.

    There Are Times I *HATE* Outlook and Exchange

    1 June 2011

    I just did a speed test from my house. My bandwidth is over 3Mbps. I can ping the server a hundred times with no packets lost and low latency. Outlook reports “Connected to Microsoft Exchange”. I can open any of the messages in the Inbox (well, most of the time; the screen still “grays out” every once in a while).

    But try to forward a message, or open a simple calendar entry, and time and time and time and time again:

    “Outlook is requesting data from the server”. OR,

    “Outlook must be connected to the server to complete this action”.

    This has been going on for half an hour now. Why can’t a big-time outfit like Microsoft get this right? I’ve said before, I have this problem even on the “corporate” network, and I have it at home, at hotels, and at other companies. Wired and wireless.

    And yet all the other stuff (browsing, ftp, anything) works fine.

    Outlook and Exchange act like they are still in beta test sometimes. Geeez.

    Goddard Visitor Center and National Wildlife Visitor Center, MD

    14 May 2011

    Thursday, I had some time on my hands after meetings, and these two places were conveniently on the way to BWI, so I stopped at each to visit.

    A place like Goddard really brings out the nerd in me!

    This was my third try at visiting Goddard Space Flight Center. The first time, about a year ago, I got there at 1510, but they close at 1500. The second time was Monday when I arrived in the area; I got there at 1230, but they are closed Monday and Tuesday. I got there at 1245 Thursday. The neatest part was the Rocket Garden outside.

    This one shows some of the buildings on the main Goddard campus, and the myriad of cool antennas.

    Inside, there were some really cool exhibits. My favorite was a set of images of the Sun showing various activity, such as prominences and flares. I really liked this one also; it shows the Sun in X-ray, and the difference in activity between the solar minimum and maximum.

    This is looking at the “business end” of a plasma generator.

    They also have a mockup of a Gemini capsule. It’s the same basic shape as a Mercury capsule, but the dimensions are slightly larger. There are two seats in there, and if you sit in them, you can briefly imagine what it was like to sit in there for up to two weeks! I have always admired the courage of the men who volunteered to squeeze into a small space like that, on top of several thousand tons of highly explosive material, and have it LIT.

    There is a decent gift shop. My favorite things there were license plate frames for the employees. One had “Yes, I AM A Rocket Scientist”, and another “186,000 MPS… It’s The Law!”. I think I would like working with those guys.

    From Goddard, I went to the National Wildlife Visitor Center. I’ve passed the place a million times on the BW Parkway. Turns out the real name of the place is the Patuxent Research Refuge National Wildlife Visitor Center. There is a nice interpretive exhibit that looks like it was created in the 80s. That’s not a slam; there was interesting information in there.

    There are also hiking trails in the refuge. Turns out that the Visitor Center is on the southern, smaller tract. The north, and much larger, tract butts right up against Fort George Meade. The north tract has many more miles of hiking, and I will walk some of those on a future trip (one note, a minor gripe: I haven’t found ANY information on the trails here anywhere online. I wonder why).

    I headed out on the trail on the Loop Trail, then vectored to the Cash Lake trail, taking several spurs to lookout points.

    I saw this and thought it was a viewing tower. Turns out it is a nesting bird tower.

    The trail is pretty.

    I didn’t go fast on the trail. It was about 75F, no wind, and very pretty. The only downer is an almost constant low-frequency traffic noise from the east. There was also someone persistently shooting from the west. I crossed a floating bridge and had a flyby from two herons. Around the corner, I found one of them and snuck up on it just a bit.

    You walk over the dam for the lake, and loop back the other side. Eventually, I came to the Valley Trail. This odd tree growth was on a tree there.

    From the Valley Trail I came back to the Visitor Center via the Laurel Trail. I didn’t bring my GPS, but the mileage according to the map is around two miles.

    I didn’t see all that much wildlife. There were some geese. One highlight – I saw a Baltimore Oriole! I have wanted to see one in the wild ever since I wrote a report on them for Mrs. Allen in my 4th grade class at Whittier Elementary in Muskogee. It was a beautiful bird that sat on a branch and looked at me from about 20 feet away for about 15 seconds (too short for me to get my camera out, on, and aimed). I also saw about 500 tadpoles in a pond, one toad on the trail, and a skink. There was also a freshly-killed black garter-type snake; I think it was on the road/trail between Cash and Redington Lakes, and was run over by an employee who drove down that road while I was on the road. The guy probably didn’t see the snake, it was only about 9″ long.

    This is a map of the south tract trail system.

    From here I headed to BWI. I got there about 3.5 hours early, and found out that my flight to MSP was already an hour late arriving, which meant I would have ended up spending the night somewhere there. I was early enough that I changed my flight to come home via ATL, which meant that I got to fly on a B737 and MD88 instead of a pair of RJs, and it got me in to OKC an hour before my original arrival time to boot. So that worked out OK.

    Windows Vista

    24 March 2011

    Vista can be very frustrating to me. I have it on the laptop that the Air Force issued me to work with. Internet Explorer has bouts of being Internet Exploder, just hanging up and dying. IE (version 7 in this case) just cannot display Facebook pages correctly, and it complains incessantly about web pages, by displaying a little “error” indication in the lower left of the window.

    I’ve complained before about the picky instability of Outlook with Exchange.

    Tonight, the video driver on the computer died and BSODed the machine. Twice. I suspect it is related to the fact that I am using two monitors with the machine (support for which, BTW, is built in). This video driver issue happens every couple weeks. One of the times it happened with a Microsoft Word 2007 document open, and that caused Word all kinds of heartache.

    I finally finished what I was working on, and needed to save it to CD. With every version of Windows up to this one, you just dragged the files to the CD/DVD drive, and it wrote them out fairly quickly. Not Vista: first, it has to format the disc. Except that it complained that the CD/DVD drive was under the control of another program. A little examination of the taskbar reminded me that Media Player was running.

    Now, Media Player wasn’t doing anything remotely needing the CD drive; it was playing a TV show that I was streaming from the den of the house. But apparently, Media Player wants Total Control Over All Your Computing Assets. I shut it down, and then I could access the drive, but it still wouldn’t format the disc, and shortly thereafter the second video driver BSOD forced a computer reboot. That cleared everything up. I just now got the damn data onto the disc, 30 minutes after I started.

    I am continually frustrated by Microsoft. It took them a couple times to get XP right, and I would rather run it than Vista (or even W7, although I admit I have little experience with it).

    The Nuclear Crisis in Japan

    16 March 2011

    I am, as I imagine a great many people are, concerned at the nuclear crisis in the wake of the earthquake and tsunami this week.

    I have always supported the idea of nuclear power, and conversion of many of our transportation assets to electricity and battery-powered vehicles. The events in Japan are shaking the confidence I have in the industry, since we have many reactors of similar design in the US.

    Ambassador Rich (I don’t know ambassador of what) was just talking on the CNN Piers Morgan show about new reactor designs for soon-to-be-deployed reactors that use convective concepts to cool a reactor even if the reactor is completely unattended by humans. I do not know what he is talking about, but hope that he is speaking truth.

    I would also wonder if the cooling technology could be engineered into the most at-risk reactors, such as Diablo Canyon here in the States.

    A Blackberry Fail

    27 February 2011

    I got a new Blackberry 9780 back over the holidays, and like it a lot. I’ve noted a couple small issues with it. For one, when reading longish emails, it has a tendency to move to the top of the long message when I am trying to move down. That’s a bit annoying, but manageable. It performs very well on websites.

    Yesterday morning, I was trying to make a post on Facebook, using the FB Blackberry app. For some reason, it would crash the entire app when I was entering data into a text box, just as I got to the end of the text box and it was about to wrap the text. This was a repeatable bug. I got the entry in after four tries, by typing the entry into a dummy email, then doing a copy, and pasting it into the text box.

    An hour or so later, I picked up the Blackberry to head out the door, and noticed that it was dead. It had crashed at some point in the past hour. I booted it up, and after a long time (about five minutes) got an error message: App Error 603. That started a series of reboots. Each reboot ended with a lot of Java errors and exceptions. The time from boot to being able to do anything was about 15 minutes. My wallpaper was gone, replaced with the default wallpaper. And when I looked, I noticed that the Blackberry browser icon was gone.

    I did a lot of research quickly. The BB would not connect to the BB Desktop software. There was little online guidance. I finally found a page that talked about using the BB Update tool in standalone mode. I ran it several times, and tried to do a backup, but it never connected. Finally, based on a webpage I found, I ran the Update tool without doing a backup.

    It took about 20 minutes to load the software back onto the Blackberry. The phone booted up *much* faster – about 90 seconds. I had to turn the radio back on, but I could make calls, and a bunch of emails came in. So I could communicate. It didn’t do anything to the pictures or music, since those were on the phone’s SD card.

    I just completed a restore from my most recent backup. I have most everything back except the emails that came in since then, but those are all still on the three email account servers. So it looks like everything is pretty much back to normal.

    I should not be surprised this happened. These Blackberry phones are handheld computers, after all, and computers crash. The less-smart phones have their code stored in non-writable memory, I guess. So while it was a bit frustrating, I solved part of the problem by storing my backup file on my laptop, so now I can completely restore the phone while mobile, like I was yesterday.

    President Obama and Ubiquitous Networking

    13 February 2011

    There were two articles I found of particular interest this past week:

    Obama to unveil wireless Internet plan

    Wireless advances could mean no more cell towers

    President Obama wants to get 98% of Americans access to high speed network technology. There is no real technical limitation to making this happen. The second article is one way to do it, using micro-cell technology. The cells can be “strung” over long distances as relays, or you can use satellite, or even tall cell towers way out in the sticks. There have even been pie-in-the-sky concepts for long-endurance aircraft that would function like mini-satellites, at maybe 100Kft altitude (lots less propagation delay).

    There are the usual whiners about the President’s proposal. One objection is that it doesn’t do any good to get internet service to poor people – how can they afford it?

    That is where having the government do this makes it work. If you depended on “market forces”, there would be little incentive for business to run high speed network to people who might not be able to afford it. If the government makes it happen, and makes the network a true, ubiquitous network, then there are more people to use it, which amortizes the cost down. And you don’t need a Blackberry or a laptop to access the net; you can already get network-enabled devices essentially for free, since many, many smartphones and tablets are essentially computers.

    I think that true, widespread, ubiquitous network access is the wave of the future, location-based services in particular, with messaging not too far behind. This can be a game-changer for the country as a whole.

    I Finally Have An Upgraded Disk

    6 February 2011

    My trusty desktop has an 80GB disk that I upgraded a couple of years ago from a 20GB disk. Now, between a lot of downloads, photos, videos, etc., that disk was about 97% full. It was time for another upgrade. My computer has two disks in it: the 80GB XP disk, and another 80GB that has a 20GB Fedora 12 installation (soon to be Fedora 13), with the rest of the disk as shared space (formatted FAT32) for DVR functionality.

    I have a 500GB drive that was recovered from a failed video recorder. I popped the drive into an external carrier and looked at it; it was formatted NTFS and had Windows 2000 Professional on it. I zorched the partition, took it from the external carrier, and installed it into my desktop as an IDE slave. It was recognized by the BIOS, looking good so far.

    I started the machine up with System Rescue CD. It found the disks, but when I did an fdisk -l, it found a bunch of RAID stuff as well (RAID == Redundant Array of Inexpensive Disks, a way of combining several disks for redundancy or speed, used for databases and the like). NBD, I thought, I’m about to wipe the drive anyway.

    My usual process here is to clone a known good drive to the new drive, then grow that partition to fit the new, presumably larger drive.
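
    In shell terms, that process is just dd plus gparted. A minimal sketch, assuming the known good drive is /dev/sda and the new drive is /dev/sdb (device names will vary, and getting them backwards destroys the good drive, so check fdisk -l first):

        # Clone the old drive onto the new one, byte for byte
        dd if=/dev/sda of=/dev/sdb bs=64K
        # Then boot System Rescue CD and run gparted to grow the last
        # partition on /dev/sdb into the new drive's free space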

    So I started the cloning process using the trusty dd command. My old XP partition was copied over to the “new” 500GB drive after a bit. I moved the IDE connections around to make the 500GB drive the master and rebooted the computer. XP came up, Linux came up, all looked good so far.

    I rebooted into System Rescue CD, and fired up Gparted (Gnu Partition Editor) to grow the XP partition to the full 500GB size. Here is where I ran into a problem. The partition grow process seemed to work OK, but when gparted did the final check, it reported that another process had locked access to the 500GB disk. WTH?

    Doing some rooting around in the log files showed that Linux thought that the 500GB disk was still a RAID, and it had started RAID services on the disk. These services in turn had complained during boot up that the RAID was screwed up. I know little about RAID administration, so I was off to what I soon found was a confusing and limited set of information about RAID management.

    Many things were tried, and none worked. I inquired of Hitachi, the manufacturer, about the possibility of a low-level reformat; they recommended against it. The dd cloning operation was re-performed about five times over a couple days.

    Finally, I found a description of a similar problem online. The dmraid command was the answer. I did a dmraid -r, and it listed the RAID information it thought it knew about. Next, I did a dmraid -rE command to zorch that “RAID”. The next dmraid -r showed no RAID information. I jumped to gparted, grew the partition, and got no errors reported.
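
    For anyone else chasing this, the sequence was roughly the following (a sketch; the exact output depends on the flavor of RAID metadata found):

        dmraid -r     # list any RAID metadata discovered on the disks
        dmraid -rE    # erase that metadata from the disks
        dmraid -r     # verify that nothing is reported any more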

    Finally, I booted both Linux (OK), and Windows (also OK). Windows complained about the disk partition being “dirty”, did some chkdsk and other checks, and finally settled down and started working. The usage has gone from 97% to 14.5%, so I have a while to go before another upgrade is needed.

    I am still a bit mystified as to how Linux detected that the 500GB disk had been part of a RAID. The disk had Windows 2000 on it. Overwriting the W2K with the XP partition should have gone a long way toward de-RAIDing the disk. I can only surmise that there was RAID information on the disk past the 80GB point, and Linux picked up on it during the boot process. That would fit: as I understand it, these motherboard “fakeraid” formats typically keep their metadata in the last sectors of the disk, which an 80GB clone would never touch. Hitachi said that there were no RAID markers in the Master Boot Record (MBR). Gparted did not show any RAID flags set either. Maybe I can go figure that out later, in my “spare” time.

    A New Blackberry: The 9780

    6 January 2011

    I have had four cell phones since I got my first one in 2003. Now I have my fifth.

    Raegan has had a Blackberry since we got her a Pearl about four years ago. It has been having a lot of problems over the past six months or so, and she needed to replace it. Erin also needed her first phone since she is starting to do more activities away from one of us.

    So over the holiday we decided to go into T-Mobile and just look. Just… look… Right.

    We walked in with three phones, and walked out with seven. Holy crap. Best of intentions.

    It turns out that Erin got a nice phone with a full keyboard. It does talk and texting, and so meets our requirements. It’s a Glider 3.

    Raegan used and fell for a Blackberry 9780. We found out that her phone and Erin’s were both on a T-Mobile special that was essentially two-for-one. So I also got a 9780, and Ian replaced his $26 Wal-Mart phone with the same phone that Erin got, although in a different color.

    The kids’ phones are pretty cool. The Glider is a slider phone. It has a camera (we had extensive discussions with the kids about appropriate use of that), and takes a miniature SD card (we are taking the 2GB cards that came with our Blackberrys and putting them into the kids’ phones). Erin has been taking pictures of almost anything, and Ian has been playing with the configuration of his incessantly.

    Those Blackberry 9780s are amazing. The display is huge and very bright. The phone has 3G service. We have data plans for both our Blackberrys, and we pay for 3G whether we use it or not, so getting a free phone and automatically getting that extra bandwidth is well worth it. Like my 8220, it does either the mobile network or a local wifi, so I like that. At a lot of places, I get two more bars of service with the 9780 than I showed with the 8220, so the radio in it is better. I used to get between 40Kbps and 80Kbps of bandwidth with my 8220 (on average, about 50Kbps). I’ve been running bandwidth tests all over OKC, and I get between 300Kbps and 1.4Mbps. An amazing difference. Slow-loading websites FLY!

    The thumbwheel I used with my first Blackberry 7105t was vastly improved by the trackball in the 8220, but that trackball got crudded up often and had to be cleaned out. The 9780 uses a thumbpad that has only one moving direction, a button to click, and it’s pretty darn sensitive.

    The 8220 was very comfortable in my hand, and I could use it one-handed. The 9780 is quite a bit wider, and I’m having trouble getting used to one-handed operation. The camera in the 9780 is amazing; it’s a 5MP camera, and the images are sharp.

    The 9780 has a GPS in it. It also uses cell tower information to maintain a rough fix on your location, even indoors, and it can lock in your position with as few as three satellites. I downloaded Google Maps to replace/supplement Blackberry Maps.

    There are a bunch of apps on the Blackberry that I have no clue about. There are some that I want to go look at, but I have had trouble connecting to Blackberry App World.

    I was sort of sad to stop using my 8220. I was really happy with it. The only thing that I really miss about it is the large keyboard. The 9780 has a full keyboard, but it’s a bit small and takes some getting used to. I think the speed of the network access will help me get beyond that.

    I have found one bug. I was exchanging SMS text messages with my friend Gayle, and I entered this message: “*I* get a lifetime pass!”. The message failed to transmit. I reentered it, and got the same failure to transmit. I played with the message by removing the two splats (now it was “I get…”), and it worked. Next, I removed the leading splat (“I* get”); that transmitted. Finally, I did the last permutation (“*I get”), and that failed. So, the SMS does not like a leading asterisk, or splat. Dunno why.

    These Blackberrys are, in a word, amazing. They are true handheld computers. I will be doing more blogging on the device. There has only been one real problem with the 9780: the “w” key was very sticky, but I have been working it pretty hard, and it is nearly back to normal now.

    I’ve already used the 9780 several times in non-wifi areas as a tethered data source for my laptop. I have to have the Blackberry Desktop Manager installed, but that’s easy enough. This tethered arrangement provides more than enough bandwidth to run most any internet application.

    I’m looking forward to getting a lot of good mobile computing use out of the Blackberry 9780.

    Spam And The St. John’s Server

    23 December 2010

    I am the IT Department for St. John’s Episcopal School and Church, and as such have to keep the network, servers, and workstations running. One of the biggest problems I have had in the past couple of years is spam. Not the canned variety, but email spam.

    Spam has been a running problem for a number of years, but was generally manageable. Starting around the beginning of 2009, the amount and virulence of the spam got much worse. I was getting a lot of complaints from my users, especially the science teacher in the building, who was very persistent in her complaints (usually, I heard about this at night in bed. About the SPAM, that is!).

    I started looking for some solutions, and after a bit settled on using SpamAssassin. It had good reviews and was free for a not-for-profit. I installed it towards the end of school (to minimize user impact if something went wrong), and spent some time each evening for a couple of weeks feeding the spam that my users found back through the SpamAssassin learning tool. After a bit, the spam load started trending down (see the chart below).
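
    The retraining is mostly a matter of running SpamAssassin’s Bayes trainer, sa-learn, over the saved messages. A minimal sketch; the mailbox path here is a made-up example:

        # Teach the Bayes filter that these saved messages are spam
        sa-learn --spam --mbox /home/spam/user-reported-spam
        # Show the Bayes database statistics afterwards
        sa-learn --dump magic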

    I don’t know why the load dropped. I didn’t have a way to automagically collect metrics on the spam coming in, but my gut told me it was in the 150-200 per day range, which is about what the chart shows. My guess is that as I started automatically rejecting spam at the server, the spam was not able to call home (a lot of it had embedded single-pixel image loads from a remote server, intended as a return receipt from our IP address).

    Whatever the reason, spam dropped a lot at the user level. I rarely got complaints about it, and what was getting through was getting manually killed by the users. For a couple of months, I put in a couple of hours every couple of days running the spam through manual analysis, tightening the reject criteria.

    This happy situation persisted for more than a year. Daily spam was in the 100 or less range, and the users rarely got any.

    Then, over the summer of 2010, my resident science teacher started getting a LOT of spam. You can see that spike, up in the 800-1000 range. It dropped off a little bit, but she was getting a significant amount of spam that SpamAssassin was not catching.

    My analysis showed that most of it was coming in as if it were being sent from our own domain. I had whitelisted all the school- and church-related email addresses I could get, including the teachers’, of course, and that made SpamAssassin think the stuff was legitimate. I did some more tuning of the filter criteria, but we were clearly in a world of hurt.
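
    One thing I have since read about (but not yet tried) is SpamAssassin’s whitelist_from_rcvd, which only honors a whitelist entry when the message actually arrived through a relay in the named domain, so a forged From: address alone doesn’t get a pass. A sketch of what the local.cf entries might look like, with made-up addresses:

        # whitelist_from trusts the From: header, which spammers forge:
        #whitelist_from *@example-school.org
        # whitelist_from_rcvd also checks the relay the mail came through:
        whitelist_from_rcvd teacher@example-school.org example-school.org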

    I was reading one of my daily set of admin/security emails, and the spam blocking service Spamhaus was mentioned. I went and checked it out. Spamhaus maintains a set of known spammers’ domains and IP addresses. They will feed this to you, and institutions like schools and churches can use it free if they are below a certain volume of traffic.

    I went back and did some grepping in my mail logs, and found that I was getting connections on the order of 1% of the maximum Spamhaus criteria. The reviews of the service were good also.

    So one evening in early November, I made the changes to my Sendmail MC file, rebuilt it, and then sent myself a batch of emails to make sure that I had not broken the server. While I was doing this, I checked the log file to make sure the messages had come in OK, and there was… a log entry from the Spamhaus service. It had zorched an attempt to connect and deliver some spam, it seems.

    Well, that was pretty cool, I thought. I headed home, and checked again when I got there. The number of blocks was up to about 50! I asked people in the school and church to tell me if they got spam, or more importantly, if they didn’t get an email from someone who had sent it.
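
    The MC change itself is essentially one line, for anyone wanting to try it. A sketch (the reject message text is configurable; on a stock Fedora sendmail the .mc file already pulls in the cf.m4 macros, so a plain m4 rebuild works, but your layout may differ):

        dnl Reject connections from hosts listed in Spamhaus Zen
        FEATURE(`dnsbl', `zen.spamhaus.org', `"550 Rejected - listed in zen.spamhaus.org"')dnl

    Then rebuild and restart:

        m4 sendmail.mc > sendmail.cf
        service sendmail restart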

    As the weeks went by, the number of blocks was steadily increasing. I decided to do a bit of analysis. First, I made a script that traversed a set of files I had set up for each user to dump spam into. I had to tweak the grep commands a couple times, but ended up with a file that had one line in it for each spam, including the date.

    I then wrote another script that pulled the same data out of my maillog file; that was easy. So now I had about 10MB of data.
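
    The scripts were nothing fancy, just grep and a little counting. A rough sketch (the grep pattern is a stand-in; the real ones took a few tweaks, as I said):

        # One line per Spamhaus rejection, pulled from the mail log
        grep 'zen.spamhaus.org' /var/log/maillog > zen-hits.txt
        # Count hits per day; fields 1 and 2 of a syslog line are "Mon DD"
        awk '{print $1, $2}' zen-hits.txt | sort | uniq -c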

    Next, I wrote a short (~100 lines) Visual Basic for Applications (VBA) app that made that data graphical. I used VBA since I wanted to use Excel to plot the data, and VBA was built into Excel, and it handles flat data files well. I had some issues with getting Excel out of compatibility mode, since I needed a spreadsheet that was about 1000 cells across. The VBA app got the earliest and latest dates so I could get the width of the spreadsheet right, then it read each line, and for the date extracted from each record, it incremented the cell value. There were two lines of data – one for SpamAssassin actions, and one for Spamhaus Zen actions.

    The data were remarkable:

    The data showed that the Spamhaus Zen blocker essentially killed off most of the spike in spam that started last summer. It took the level down to the previous level, and once again my users are not complaining about spam. The Spamhaus Zen function blocks listed senders the moment they attempt to connect to sendmail on the St. John’s server. This saves CPU cycles, since sendmail doesn’t have to run as many messages through SpamAssassin.

    One thing I need to add to my data collection is how many actual messages make it to the users. That is a project for the next couple weeks.

    This was an interesting project in several ways. I had not done any VBA programming before. It was EASY. I have a lot of lines of Visual Basic under my belt, so the syntax was already there. I had to do a bit of research to get the specific Excel commands, but they were not too bad. The resulting spreadsheet was about 500 cells wide and had three rows of data in it, and that spreadsheet caused my rather beefy laptop to get really, really slow.

    I used VBA since I knew that Excel made nice graphs, and that’s what I had on my laptop. I don’t know if OpenOffice Calc does the same or similar; I want to find out what kind of scripting (if any) OO has. I also want to have a look at tools like GnuPlot (I know it exists, I just don’t know if it will work for this) to see if I can have the server generate this automagically every week or so.
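
    If GnuPlot pans out, the weekly chart could be a small script run from cron. A sketch, assuming a flat file spam.dat with a date and the two daily counts on each line (that format is my invention, not what the VBA app used):

        # spam.gp (run with: gnuplot spam.gp)
        set terminal png size 800,400
        set output 'spam.png'
        set xdata time
        set timefmt '%Y-%m-%d'
        plot 'spam.dat' using 1:2 with lines title 'SpamAssassin', \
             'spam.dat' using 1:3 with lines title 'Spamhaus Zen'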

    Always a new skill!

    Net Neutrality

    22 December 2010

    The FCC promulgated new regulations pertaining to Net Neutrality today. As with most compromises, few people were totally happy.

    I believe that the use of networks should be neutral for anybody, from an access standpoint. I don’t have a problem with “last mile” providers, such as traditional ISPs, charging varying amounts for the bandwidth provided.

    A typical example of this might be St. John’s paying Cox Cable $120/month for 20Mbps up and down, and yet charging me $30/month for 6Mbps down and 2Mbps up. To really overuse a cliche, the number of lanes of your on-ramp to the Information Superhighway drives the amount you pay for it.

    But once your data is flowing back and forth, it should compete equally with all the other traffic. I think the concept of a backbone operator taking money from third parties so that a particular protocol or data type can shove other packets aside is (1) not very fair, and (2) not terribly democratic. You would start running into situations where people running VoIP might get a crappy signal because someone else paid money to get priority for streaming advertisements.

    I see the internet as one of the few remaining bastions for equality. There are far too many corporate voices in the media already, exerting political and editorial control over viewpoints. The internet can help level that out somewhat.

    So the FCC did the right thing by requiring network neutrality. They need to go farther and place the same restrictions on wireless carriers (of course, unfair competition from other wireless carriers using another’s infrastructure can still be prohibited).

    But the public should not be crowded out by corporate money on the internet.

    Some Dual-Boot Weirdness, XP and Linux

    19 December 2010

    My very cute and computationally intensive roommate has a fairly beefy Dell dual-core machine. She does a lot of graphics work, and collects music and videos. Her machine had a 160GB drive and 1GB of memory, and we determined that an upgrade was in order. She also wanted a Linux distribution to get some experience with.

    I did a short amount of searching, found that prices for the components I needed were pretty much the same, and chose a 1.5TB Seagate drive, and four 1GB DDR2 sticks from TigerDirect. Total price, about $160 including shipping. I was amazed at the low price. The stuff arrived at the house a day earlier than the four-day shipping promised.

    So yesterday, I started off by installing the memory sticks (and ran into the 32-bit Windows limitation of only showing 3.7GB, in spite of the BIOS showing all 4GB. Really, you would think a big-time outfit like Microsoft would fix that).

    Next, I installed the 1.5TB disk into the chassis as the first SATA drive. Dell recognized it immediately.

    I booted the computer with my trusty System Rescue CD v1.6. It started up just fine. At the command prompt I did an fdisk -l, and it showed both disks. Here I made two small errors. I executed my favorite cloning command: dd bs=256 if=/dev/sdb2 of=/dev/sda. First error: there were a couple of useless partitions on the 160GB disk, and I just wanted the XP partition, so I thought to copy only it over and grow it to full size later. But a partition image copied onto the raw disk has no partition table, so the result would never boot. Second error: a “bs”, or block size, of 256 bytes, which made for the least efficient transfer possible. The second error meant it took about four hours to do the cloning operation, and the first error meant that the clone would fail. Fortunately, because I had not changed the original disk, the errors were no-impact.

    The second time I did it right. I executed dd bs=256K if=/dev/sdb of=/dev/sda. This time the 160GB clone took about 40 minutes. One thing – when I clone drives at school, it usually takes about 40 minutes to clone a 40GB disk using IDE. The two SATA drives have rated throughput of 1.5Gbps and 3.0Gbps, and that speed shows!
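
    The block size effect is easy to demonstrate. A sketch that reads the same 1GB off the source drive at both block sizes, timing each (reads only, so it is safe to run against a live disk):

        # 4,194,304 blocks of 256 bytes = 1GB
        time dd if=/dev/sdb of=/dev/null bs=256 count=4194304
        # 4,096 blocks of 256KB = the same 1GB, with far fewer system calls
        time dd if=/dev/sdb of=/dev/null bs=256K count=4096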

    Once I got the disk cloned, I booted the 1.5TB disk and watched Windows thrash around a couple of minutes to deal with the new disk and the moved disk. Then, I rebooted into System Rescue CD and used gparted to delete the two useless Dell partitions (getting back another 3.5GB of disk) and then grow the XP partition forward and backward to make it about 1.4TB. I rebooted the computer again, XP came up, and disk usage had gone from about 90% to 10%. Very cool. I’m running the 160GB disk as a secondary disk for a while; then I will clean XP off it and use it as a backup disk.

    So I started the second part, putting Linux on. When I grew the XP partition, I created two other partitions at the end of the disk: one 5GB partition formatted as FAT32 (which both Linux and XP understand), named “Shared”, and one 50GB partition named “Linux”. I did this out of habit. XP doesn’t understand any disk format except Microsoft-developed stuff. Linux has understood NTFS read-only for a while, and read-write more recently. I have usually dealt with this by creating a couple-GB partition formatted as FAT32, so that if I needed to share a file between Windows and Linux, there was a common format both understand. I need to get out of that habit, and start just having Linux mount the Windows partition every time, using ntfs-3g or whatever the distro supports.
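
    Doing it the ntfs-3g way is just an fstab line. A sketch, assuming the XP partition is /dev/sda1 and the mount point already exists (check fdisk -l for the real device):

        # /etc/fstab: mount the XP partition read-write at boot
        /dev/sda1   /mnt/windows   ntfs-3g   defaults   0 0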

    I had given some thought as to which Linux distro to install. I am partial to Fedora, which I use for its wide variety of tools, because it is what I use for the school server, and because it has multiple software development environments. Raegan needed much less: editing (OpenOffice), graphics (The GIMP), Internet (Firefox and Opera), and media (audio and video). Given that, the fact that I use Ubuntu on several of the student computers at school, and the fact that Fedora has to be hand-configured with a lot of video and audio tools (that I rarely use), I decided to get her Ubuntu.

    I downloaded it for her new disk, burned the CD, and then booted the computer from the CD, and… major problem. Ubuntu seemed to hang for a looooong time, then I got an error, “Ubi-language crashed” or something like that. It was consistent across several tries at loading. I looked the error up on Google and got very few references to what caused it. So after weighing troubleshooting against a known good route, I said the heck with Ubuntu and went to Fedora 13. [Quick update later this afternoon: I popped the Ubuntu 10.10 CD into two of my machines, and it booted all the way up just fine. One is a Dell Dimension 4600, and the other is my cranky HP 6930p. I say cranky because it needs a particular driver for XP, Vista, and W7 installs, and even a special parameter for a Fedora install. But it ran Ubuntu just fine.]

    I pulled my F13 live CD (which I knew had an install-to-disk function) and fired up her computer with it. It started just fine, and so I told it to install to disk. I had told gparted to format the partition intended for Linux as an ext3 filesystem, and F13 found it just fine when I told it to select existing Linux partitions and use it for the installation.

    The installation went very quickly, and eventually it asked me about the bootloader. It detected the XP bootloader on the first partition (which is called /dev/sda1). I used the editor to rename the XP description from “Other” to “Windows XP”, and changed the Linux description from “Fedora” to “Fedora 13”. Looking good so far.

    The next time I rebooted, there was a bit of a delay, but no Grub boot screen. I seemed to remember that when I did an F12 installation at one point, Grub had been set to not show a menu (why, I don’t know; that seems stupid for a multi-boot computer). I tried to force my way into the boot menu by hitting the space bar a number of times during boot, but it only annoyed Windows as it booted each time.

    I did some research and found a way to boot into Linux rescue mode from the full Fedora 13 DVD. This quickly showed me that the Grub menu had indeed been set to not display. I changed that using vi (I had to dredge up the editing commands from a memory long ago and far away; I am not a vi fan) by commenting out the line that said to not display the menu, and then changed the timeout before booting the default entry from 5 seconds to 15 seconds.
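
    For reference, the relevant bits of /boot/grub/grub.conf ended up looking something like this (a sketch from memory; titles, kernel versions, and partition numbers will differ per machine):

        default=0
        timeout=15
        #hiddenmenu    <- commented out so the menu is displayed
        title Fedora 13
            root (hd0,2)
            kernel /boot/vmlinuz-2.6.33.3-85.fc13.i686 ro root=/dev/sda3
            initrd /boot/initramfs-2.6.33.3-85.fc13.i686.img
        title Windows XP
            rootnoverify (hd0,0)
            chainloader +1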

    Restarted the machine, and damned if XP didn’t come up again! At this point, it was about 2300, and I said the heck with it and went and did other things.

    My general feeling is that it was just too hard to install Linux (rather, it was easy to install Linux, but making it dual-boot with XP is too hard)! I have thousands of OS installs under my belt, including hundreds of Linux installs. When I did my first dual-boot installation (I think it was Windows 2000 and Fedora Core 2), FC2 built the dual-boot configuration automatically. The last couple of dual-boots I have done required me to use System Rescue CD to fix things, something that is easy for me but impossible for 99% of people. That is not a good thing for Linux.

    I’ve found some rather detailed things to try (including changing the XP boot loader to find the Linux install, sketched below), but that is low-priority compared to other things I’m doing around the house, so it will be a while before I get Raegan up on Linux also.
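
    For the record, the XP-bootloader route goes roughly like this: Grub gets installed to the Linux partition’s boot sector (not the MBR), a copy of that boot sector is handed to Windows, and boot.ini chainloads it. A sketch of the technique, not something I have run on her machine yet; the partition names are assumptions:

        # From Linux: copy the Linux partition's boot sector (where Grub
        # was installed) to a file the XP loader can chainload
        dd if=/dev/sda3 of=/mnt/windows/linux.bin bs=512 count=1
        # Then add this line to C:\boot.ini and reboot:
        #   C:\linux.bin="Fedora 13"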