August 2, 2016

August 1, 2016 – 7:28 am

Sad day for me. After nearly 30 years of technology – saving files, programs, pictures, and so much more – I ended up losing everything in a large data loss incident. Over the years, my storage needs have grown and grown. I have moved from servers with physical disks shared via NFS/CIFS, to multi-TB NAS drives, and beyond, and I always had another drive for backups. Early this year, I had outgrown my 3TB multi-drive USB/eSATA NAS and needed more. That's 3TB of storage and 3TB of backups – so the need was pretty big. I found a nice 12TB server and figured I could do both on there: build it as FreeNAS, use it for all my VM's and my other storage needs, and just carve out part of it for backups. Partly because I was doing things too quickly, and partly because I was being cheap, I went for the single 'device' approach.

It was a six-drive Dell 2950 server with RAID-5, providing 10TB usable, which was perfect for my needs. I built it out, and all was good for about 4-5 months – long enough for me to be confident in the approach. At that point, I redeployed both the old NAS and the old server for some container project work, and ended up having to rebuild the workstation I had used for all the data transfer during the prior storage migration.

I guess you can see where this is going. I had a drive failure on the Dell, so I picked up a new 2TB drive, and it rebuilt OK. Phew. About a month later, that same drive slot started reporting an issue, so I got another 2TB drive and swapped it out. Note that the first time, I brought the server down and performed the RAID rebuild completely offline through the server BIOS, and it took about two days. The second time, I went with what everyone else was saying, since I was more nervous. Everyone says *never* bring a server down with a bad drive like I did previously, so I 'listened' and swapped the drive with the server online. About half a day into the rebuild, another drive reported failure … and I was effed. That was it for my storage. It was hardware RAID-5 with FreeNAS ZFS on top of that, carved into multiple pools. Everything became confused, and I can no longer access the data. I've tried so many ZFS rebuild tricks, but just can't recover enough with only 4 of 6 drives.

Moral of the story: after 30 years I should have been smarter and not put backups on the same device as the content … and I know better. I'm also not rebuilding this stuff in my own datacenter again. This time it's all cloud. I lost my music, my pictures, my documents – all that stuff. But I also lost all my VM's, which is the real pain point. Of course I lost my backups too, but at this point it's the same difference. So this time it's VM's in AWS, micro-instances where possible, and I need to find a better solution for the personal storage (music, movies, pictures, etc.).

August 1, 2016

August 1, 2016 – 7:10 am

Recently had some trouble connecting to the admin site of one of my old applications, which uses https. It turned out I needed to adjust the cipher suite settings to make it work. To do so, go into Firefox's "about:config", search for "security.ssl3", and set all the "rc4" settings to "false". So that this wouldn't affect my normal browsing, and to reduce the risk to me, I downloaded a copy of the PortableApps version of Firefox, installed it into a new directory called "weakfox", and changed these settings only in that instance. Then I use only that instance to admin the old application.
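
The exact pref names vary by Firefox version, but the RC4 entries under "security.ssl3" looked roughly like this in about:config once they were flipped (the pref names here are examples from memory, so double-check against your own build):

security.ssl3.ecdhe_ecdsa_rc4_128_sha = false
security.ssl3.ecdhe_rsa_rc4_128_sha = false
security.ssl3.rsa_rc4_128_md5 = false
security.ssl3.rsa_rc4_128_sha = false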

https://support.mozilla.org/en-US/questions/986913

June 28, 2016

June 28, 2016 – 11:51 am

After a bit of an issue with one of my WordPress sites, I noticed that the versions I had been running had become very long in the tooth. There were many vulnerabilities in the older version, and one had been exploited, so it was time for an upgrade. The upgrades were done for all sites and were truly painless. One gotcha I found, though: I had been running the MySQL db on a very old Ubuntu 5.04 host, and that MySQL version was too old to support the WordPress upgrade. So I had to migrate all the db's off the old MySQL host and into a newer database on a newer host. After doing that, things went pretty smoothly.
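
For reference, the migration itself was basically just a dump and restore; something along these lines (the hostnames and db name are placeholders, not my exact commands):

$ mysqldump -h oldhost -u root -p --databases wordpress > wordpress.sql
$ mysql -h newhost -u root -p < wordpress.sql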

November 20, 2015

November 20, 2015 – 10:49 pm

After struggling for the last week or so with inbound SMTP failing, I have decided to steer clear of running service on the default SMTP port 25 for my network. ISPs get in the middle of it, annoyingly, and cause issues. Helpful tools for troubleshooting were a port check tool and MX Toolbox.

While there are some great services out there to work around that, I initially selected GhettoSMTP. They were pretty straightforward in terms of signup, and then it's just a few MX record updates. However, I'm an impatient person, and after waiting a few hours for the forwarding to be set up and hearing nothing, I gave it a shot – but they rejected all forwarding. So I gave up and chose to deal with it myself. Here's how :

  • Obtain a free Amazon Web Service account at http://aws.amazon.com
  • Launch a free AWS instance in Amazon’s cloud (pick a standard Ubuntu linux instance)
  • Save the local key pair created when launching/initiating the new AWS instance.  Use that to test server access
  • Install the haproxy software (this is the reason for using a Debian-based instance – it's simple): "apt-get install haproxy"
  • Edit the haproxy config (/etc/haproxy/haproxy.cfg) to set up a TCP (non-HTTP) service that binds to port 25 and forwards to the external address of my locally hosted service, but on another port (see the config sketch just after this list)
  • In the AWS dashboard, add an access rule for port 25 inbound
  • Set up my own firewall so that it allows incoming SMTP on a different, non-standard port (which is what I set as the destination port in the haproxy step above)
  • Modify my public DNS MX records to use the new AWS instance (by lengthy DNS name, instead of IP) as the highest priority value
  • … wait for flood of email to arrive
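
For the haproxy step, the config fragment is just a plain TCP listener; a minimal sketch (the home hostname and back-end port below are placeholders for my real ones):

# forward inbound port 25 to the home firewall on a non-standard port
listen smtp-forward
    bind *:25
    mode tcp
    option tcplog
    timeout connect 10s
    timeout client 1m
    timeout server 1m
    server home my.home.example.net:2525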

And voila! Inbound email again. Lots of it. Now I just need to keep an eye on the AWS usage, but the free-tier instance (class "t2.micro") is low power, low duty. It's a partial virtual CPU with low memory, and all it's doing is running the basic O/S services plus the haproxy port forwarding. Load should be very low, so it should be good.

August 23, 2015

August 23, 2015 – 2:14 pm

Tired of all the hack attempts against my WordPress installs.  So finally :

  • upgraded to the latest release
  • installed brute force attack protection plugins
  • installed security plugins
  • created .htaccess files in each deployment (based on the perishablepress.com article linked in the snippet below), which looked like this :

# protect xmlrpc from https://perishablepress.com/wordpress-xmlrpc-pingback-vulnerability/
<Files xmlrpc.php>
Order Deny,Allow
Deny from all
Allow from 192.168.1
</Files>

# protect wp-cron from https://perishablepress.com/wordpress-xmlrpc-pingback-vulnerability/
<Files wp-cron.php>
Order Deny,Allow
Deny from all
Allow from 192.168.1
</Files>

January 12, 2015

January 12, 2015 – 10:23 pm

I had been looking for a way to do some "date math" in Linux. Normally I use Perl and the date manipulation modules, but this time I needed something that would work in Linux without the extras. Note that this won't work in Solaris or AIX unless they have GNU date installed – but at least on Linux this works. As an example, I'm looking to calculate the difference in hours, minutes, and seconds between two dates that are relatively close to each other. In my case they'll never be more than 24 hours apart, so this is straightforward. The trick is that to do anything beyond simple "day"-based math, you need to convert the dates back to epoch form. If you simply want the difference between two calendar days, it's a lot simpler than this.

  • Start with date 1, and convert to count of seconds in epoch form :

$ date -d "2014-01-13T04:30:30" +%s
1389562230

  • Now get the second date and convert it to count of seconds in epoch form :

$ date -d "2014-01-12T21:15:15" +%s
1389536115

  • Since we now have two epoch times, we can perform a difference between them using good old basic math with “expr” :

$ expr 1389562230 - 1389536115
26115

  • We now have a count of seconds between the dates, so lets convert that count of seconds to something usable like hours, minutes, and seconds :

$ date -u -d @26115 +"%T"
07:15:15

So now that we have the approach, this can all be summed up into a basic one liner :

$ date -u -d @$(expr `date -d "2014-01-13T04:30:30" +%s` - `date -d "2014-01-12T21:15:15" +%s`) +"%T"
07:15:15

The key is just remembering to feed the date in using the form that includes the magic "T" in the middle, which allows the time to be included. Had a hard time finding that one. So feed the dates in with the form :

YYYY-MM-DDTHH:MM:SS

And it all works out pretty well, with no need for any special Perl modules. All done inside a bash script.
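
For reuse, the whole thing drops neatly into a small bash function; a quick sketch of the same approach (assumes GNU date, and that the first argument is the later of the two timestamps):

diff_hms() {
    # convert both timestamps (YYYY-MM-DDTHH:MM:SS) to epoch seconds, subtract, and print as HH:MM:SS
    local t1=$(date -d "$1" +%s)
    local t2=$(date -d "$2" +%s)
    date -u -d @$(( t1 - t2 )) +"%T"
}

$ diff_hms "2014-01-13T04:30:30" "2014-01-12T21:15:15"
07:15:15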

October 5, 2014

October 5, 2014 – 7:58 pm

After the whole recent shellshock vulnerability issue, I decided it was finally time to upgrade my web hosts.  I have upgraded (and replatformed) many times over the years :

  • I think the first web host I had ever exposed to the outside world from my home network was a RedHat 5.2 whitebox server.  That was way way back – maybe 1996?  I know I had that up and running when I started a job in Jan 1997, so it had to be about ’96.
  • After that was a Caldera Linux host – I think it was version 1.1 back then.
  • Eventually that got to a Caldera 2.3 host which I liked quite a bit – and was *super* responsive.  To this day, that was still the best performing web host I ever had and it was on 486 hardware if memory serves … maybe a Cyrix or Nexgen 586.
  • Due to the need for some other features, I then went to a (gasp) Windows NT 4.0 server, sitting behind WinProxy on a kickin' Dual Pentium Pro 200MHz!  Man, that server cost a bundle.
  • After that it was a Turbo Linux 4.0 host – which was fun, with its software-based load balancing.  That was a tough time, and I went through about 3 iterations of the Turbo Linux host due to a rash of bad Seagate drives and these being white box, single-drive hosts.  Lessons learned – after that it was better hardware, like SCSI-based whiteboxes with multiple drives.
  • Then when my Turbo Linux 6 host failed, it was easier to update.
  • I think I tried a Solaris 2.6 Intel host in there … if memory serves, even a Solaris 8 Intel as well (which I know I used for both mail and DNS), but never really cared for that.
  • I know I did a lot with virtualization and DL580's after that, and I'm missing an update after that one (maybe Ubuntu 5.04?).
  • I'm pretty sure that takes us to my "most recent" host, which has been Fedora Core 8 for the better part of the last several years and has served me quite well.  I'm not a RedHat fan, so it's surprising I've kept it around as long as I have.

These have all been VM-based setups for a while now, and they have migrated with me over the years across many iterations of hardware and virtualization platforms (Xen, VMware, etc.). But after shellshock, my Fedora host had finally reached the end of the line – just too hard to patch.

So this weekend I pulled down the latest LAMP instance from TurnKey Linux (love those) and migrated all my stuff over to the version 13 LAMP stack in about 2 hours. These "every couple of years" migrations have taught me to keep things in consistent locations on a single filesystem, use as standard a package set as possible (to be portable across distros), and try not to go bananas customizing. That's what allows these fairly fast migrations. So now I'm on Debian 7 with automatic daily patching for vulnerabilities and denyhosts implemented, and things look good.
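
The patching and denyhosts pieces are just stock Debian packages; roughly this on Debian 7 (a sketch of the typical setup, not necessarily my exact steps):

$ apt-get install unattended-upgrades denyhosts
$ dpkg-reconfigure -plow unattended-upgrades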

Unfortunately, at the same time all that was going on, I lost my HP dv7-3065dx laptop to a bad drive, lost yet another living room mini switch for my entertainment center (Cisco SD208), and my circa-2003 Sony subwoofer is on the fritz now too. Gotta take the good with the bad, I guess.

April 30, 2014

April 30, 2014 – 5:52 pm

I've dealt with this issue for quite a while but never really recorded the info. Now that I forget more and more of these fixes, it's time to start writing them down. I often have Linux servers with Samba shares, and some of these are still running older releases of Samba. This causes issues for Vista and Win7 clients that are trying to connect to them.

If it's a home version, the fix is to bring up regedit, navigate to "HKLM\SYSTEM\CurrentControlSet\Control\Lsa", create a new DWORD value called "LmCompatibilityLevel", switch the entry to decimal instead of the default hex, and set it to "1". If it's the enterprise version, navigate to the same key, where the value already exists, and change it from "3" to "1". Once the value is saved, everything works just fine.
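
The same registry change can also be scripted from an elevated command prompt instead of clicking through regedit; a sketch (decimal 1 corresponds to the "Send LM & NTLM responses" level):

C:\> reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v LmCompatibilityLevel /t REG_DWORD /d 1 /f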

In enterprise versions, this should also work :

  • Run “secpol.msc” from Start|Run
  • Navigate to "Security Settings", "Local Policies", "Security Options", and the item is the "Network security: LAN Manager authentication level" policy
  • Set that to "Send LM & NTLM responses"

After posting this, I found two great links that explain it better than I did : http://wiki.digitus.de/en/LM_%26_NTLM and http://www.groupes.polymtl.ca/gchit/?page_id=238

April 20, 2014

April 20, 2014 – 8:46 pm

After changing NAS devices some time ago, I realized I had not updated my iTunes library in quite a long time. My drive name had changed, as had the sub-folders, so it wasn't possible to simply mount my new NAS in the old location. This screwed up iTunes, because it couldn't find anything at all. After much searching, and finding many people with similar questions, I found a site (http://samsoft.org.uk/iTunes/scripts.asp#FindTracks) where a generous individual has shared some nice scripts that help with changing locations, among other things. It will automatically swap an old location for the right location if there is only one suggested match. But if there are multiple possible matches, the program pauses and asks for input (which to choose). With more than 35k songs in my library, this is a long-running script. It has been running for 4 days so far, and due to the pauses each time it can't make a suggestion, and the fact that I'm not in front of my PC on a 24×7 basis, this is gonna take a while. But so far it looks very promising.

April 15, 2014

April 15, 2014 – 8:01 pm

Received my new "MagicJackPlus 2014" (MJP) upgrade kit for my gen-1 original MagicJack. I no longer need to have a dedicated laptop in my office for this purpose, which frees that laptop up for other work now. The MagicJackPlus 2014 is a much bigger device, with two USB ports in it – almost like it could act as an expansion hub if it were plugged into a laptop like I used to do … or be used to wireless-enable the MJP like it says on the box (but the kit doesn't include that). Either way, after setting it up and walking through the "upgrade" process with the on-screen menus, I found I could make outbound calls but could not receive inbound calls – instead I just got the default "join MJ" type of message. I went to their website, used the LiveChat option, and in less than 5 minutes the tech had me plug it back into my laptop and do a "reset" on the device, and everything was good. Great customer service, great product, works fantastic. Mine is hardwired into my office switch and just works great.

Note : since this is a DHCP-type appliance, it required me to set up a DHCP lease reservation, based on MAC address, in order to force the device to use the IP that I wanted. Setting that up in my /etc/dhcpd.conf file took care of it for me (a sketch of the entry is below).
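
The reservation is just a host entry keyed on the device's MAC address; a minimal sketch for /etc/dhcpd.conf (the MAC and IP shown are placeholders, not my real ones):

host magicjackplus {
    hardware ethernet 00:11:22:33:44:55;
    fixed-address 192.168.1.50;
}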
