Archive for the ‘Technology’ Category

SUSE SLES 11 for SAP HANA install fails with rebootException

Saturday, June 10th, 2017

This document applies to the custom SLES image built for easier SAP HANA deployments. At the time of writing the download link pointed to: sles11_sp4_b1.x86_64-0.0.10.preload.iso. My first problem with using this image is that there is no checksum to validate your download. Here is my md5sum: 8b5e4a223b85a7b144b55b86676938e3
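Since there is no published checksum, you can at least record and verify your own. A minimal sketch of the md5sum -c workflow, demonstrated on a dummy file (for the real ISO, substitute the filename and the checksum above):

```shell
# Record a checksum once, then verify the file against it later.
printf 'demo payload\n' > sample.iso    # stand-in for the real ISO download
md5sum sample.iso > sample.md5          # saves "<hash>  sample.iso"
md5sum -c sample.md5                    # prints "sample.iso: OK" if intact
```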

I had to do a fresh load on a Dell R620 using iDRAC. As you will soon discover with SAP HANA, the hardware requirements are pretty strict and often don't make sense (well, to me at least). This particular installation had 8 x 300GB SAS disks, which the SAP documentation allocates as 6 x 300GB in RAID5 plus 2 x RAID0 volumes for the remaining disks, which it then puts into a software RAID0 for logs (why not just use the HW RAID you spent $$$ on and mirror them?). Anyway, the tip for creating multiple RAID volumes on this platform is to use the Ctrl+R option at boot rather than the wizard in the Lifecycle Controller.

So I used iDRAC to mount the ISO as a virtual CD/DVD-ROM and set it to boot from there. The installation is pretty simple: you boot, select a disk, it writes out the image file and asks a few post-install questions. In my case, however, I was presented with the target disks, but regardless of which key I pressed the installation would crash to the console:

The keywords here are:

  • System Installation cancelled
  • rebootException: reboot in 120sec…

If you go digging in part 2 of "How to install SUSE Linux Enterprise Server for SAP Business One products on SAP HANA" you will find a small section under troubleshooting that mentions a similar error. The "solution" isn't really a solution; it just prevents the auto reboot so you can troubleshoot.

So I fed back to the SAP consultant who was patiently waiting for the OS load that I was having this issue and we initially thought the ISO might be corrupt (hey wouldn’t a md5sum be handy right about now?). We downloaded a second copy each and all three matched, so that wasn’t it. He then thought about it some more and mentioned that someone had once powered the server off before getting it to work. So I used the cold boot function in iDRAC and that didn’t work either. I dug around a bit more for clues with this power thing in the back of my head and without really thinking it would make any difference whatsoever I used iDRAC to power the server down and then power it up. Would you believe it this time round I could select a disk and the installation continued successfully!

HP ML110 G6 CPU & RAM Upgrades

Friday, January 22nd, 2016

I recently upgraded some internals in my HP ML110 G6 and thought I'd post some useful details on RAM and CPU combinations. The whole thing started with a refresh of my ZFS NAS: I wanted to move from 4 x 2TB RAIDZ1 to 6 x 4TB RAIDZ2. The first thing needed was some additional RAM. The general recommendation seems to be a minimum of 8GB RAM for ZFS, with 1GB of RAM per 1TB of usable space suggested. I did have 1 x 2GB and 1 x 4GB HP ECC modules spare, and found a nice site suggesting that 4 and 8GB Kingston ECC modules would also work and that you can install up to 32GB (HP's specs say max 16GB).
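Applying that rule of thumb to the target pool is simple arithmetic; a quick sketch for the 6 x 4TB RAIDZ2 layout (RAIDZ2 gives you disks-minus-two worth of data capacity):

```shell
# Usable capacity and suggested RAM for 6 x 4TB in RAIDZ2,
# using the 1GB-RAM-per-1TB-usable guideline mentioned above.
disks=6; size_tb=4; parity=2
usable_tb=$(( (disks - parity) * size_tb ))
echo "usable: ${usable_tb}TB, suggested RAM: ${usable_tb}GB"
```

Which is why the "funny total" of 14GB below is close enough for a home NAS.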

My supplier could no longer get 4GB modules so I ordered 1 x KVR1333D3E9S/8G. Unfortunately, when I installed it my system would only see 2GB and then freeze. I checked my CPU specs (Pentium G6950) and it supports a max of 16GB; unfortunately it seems it only does that as 4 x 4GB modules. I then managed to locate a Xeon X3430, which could successfully see all 8GB of RAM. I then added the 2 spare HP modules as well for a funny total of 14GB RAM. As this is a home NAS I'm not too worried about perfectly balanced RAM.

Some comments I saw said you needed an add-on graphics card for the Xeon processor, but this is not the case. I simply removed the Pentium and added the Xeon (with some new thermal grease of course) and it all worked. The PSU also seems to handle the 6 x 4TB Seagate NAS drives without any issues. NAS4Free runs off USB directly on the motherboard.

Here is a download link for the most recent BIOS 2011.08.26 (SP54622). Requires a Dropbox account.

Edit 2017/04/25: BIOS link updated.

OpenStack: Unable to access the floating IP

Tuesday, September 15th, 2015

So as part of my foray into OpenStack I had allocated floating IPs, but had never actually tested that I could access services on them until recently. I spent quite a bit of time delving into the router config, looking at iptables rules and tracing packets with tcpdump, all in vain. Before you get in deep and dirty, first check your default security group rules. That turned out to be my problem and was really easy to fix. I was using OpenStack Kilo on CentOS 7 with Neutron networking; selinux and iptables were enabled.

The default security group does not allow ingress traffic to pass. You can change that in the dashboard: Compute > Access & Security > Security Groups > select default > Manage Rules. Here you can add ICMP and other inbound rules like SSH and HTTP.

This CLI example allows ICMP and SSH from anywhere into the default group:

neutron security-group-rule-create --protocol icmp --direction ingress --remote-ip-prefix 0.0.0.0/0 default

neutron security-group-rule-create --protocol tcp --port-range-min 22 --port-range-max 22 --direction ingress --remote-ip-prefix 0.0.0.0/0 default

If this isn’t your problem then you can start checking your router config and iptables. Two really good guides I used were:

OpenStack: Fix "Missing" External IPs in Neutron and The Quantum L3 router and floating IPs (references Quantum, but still applies to Neutron)

This also provided a nice overview of floating IPs, but uses Nova Networking: Configuring Floating IP addresses for Networking in OpenStack Public and Private Clouds



CentOS on VMware: vmxnet3 Failed to activate dev: error 1

Wednesday, August 26th, 2015

I've been experimenting with vSphere 6 and vRealize Automation in my lab environment and hit an interesting problem when deploying CentOS 6 & 7 VMs. I had created a network in NSX for the tenant, which created 2 distributed port groups on my distributed switch: vxw-dsv-XX-virtualwire-1-sid-XXXX-name and vxw-vmknicPg-dvs-XX. For some reason I could only see the vxw-vmknicPg-dvs group in vRealize, so I made a poor assumption and assigned it to the tenant.

I deployed the VMs and when networking in CentOS started I got the following error:

vmxnet3 0000:03:00.0 ens160: intr type 3, mode 0, 2 vectors allocated

vmxnet3 0000:03:00.0 ens160: Failed to activate dev: error 1

I could reproduce this error by running:

rmmod vmxnet3

modprobe vmxnet3

I tried upgrading vmware tools and switched to open vm tools, but this made no difference. Eventually I manually changed the attached port group in vCenter and the error went away and the driver loaded correctly. I then went back into the tenant reservations and the vxw-dsv-XX-virtualwire now appeared and I could assign it.

glance Invalid OpenStack Identity credentials.

Wednesday, May 20th, 2015

I’ve recently been experimenting with OpenStack Juno on CentOS 7 and hit an annoying problem where the glance image-create and image-list commands would both fail with “Invalid OpenStack Identity credentials”. All my other services were fine, keystone was happy and returned all the correct information. I worked through countless posts online all describing the same problem, most were caused by issues with keystone, database setup or the auth_uri and identity_uri formatting. I checked my config files over and over and they were all correct.

I then pushed the verbosity and debug up and got the following in the logs:

DEBUG keystoneclient.session [-] REQ: curl -i -X GET http://controller:35357/ -H "Accept: application/json" -H "User-Agent: python-keystoneclient" _http_log_request /usr/lib/python2.7/site-packages/keystoneclient/
INFO urllib3.connectionpool [-] Starting new HTTP connection (2): controller
WARNING keystonemiddleware.auth_token [-] Retrying on HTTP connection exception: Unable to establish connection to http://controller:35357/

So I ran the curl command from the CLI and got:

HTTP/1.1 300 Multiple Choices
Vary: X-Auth-Token
Content-Type: application/json
Content-Length: 757
Date: Wed, 20 May 2015 10:34:15 GMT

{"versions": {"values": [{"status": "stable", "updated": "2013-03-06T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}, {"base": "application/xml", "type": "application/vnd.openstack.identity-v3+xml"}], "id": "v3.0", "links": [{"href": "http://controller:35357/v3/", "rel": "self"}]}, {"status": "stable", "updated": "2014-04-17T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v2.0+json"}, {"base": "application/xml", "type": "application/vnd.openstack.identity-v2.0+xml"}], "id": "v2.0", "links": [{"href": "http://controller:35357/v2.0/", "rel": "self"}, {"href": "", "type": "text/html", "rel": "describedby"}]}]}}

Trying the URL in a browser also worked. So to me it looked like the service was running correctly.

So I thought about it and double checked that the firewall was disabled (it was). I then disabled selinux completely knowing that OpenStack was supposed to work in harmony with it. After a reboot many OpenStack services didn’t start so I then tried permissive rather than enforcing and glance image-list started working! So I checked the manual again and found I had missed a crucial step which was:

yum install openstack-selinux
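For reference, the permissive mode I ended up running is controlled by /etc/selinux/config (a sketch of the relevant lines; setenforce 0 makes the same change at runtime until the next reboot):

```shell
# /etc/selinux/config (fragment)
SELINUX=permissive
SELINUXTYPE=targeted
```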

The strange thing is that not installing this only seems to affect glance and none of the other services.

Hope this helps someone else in the future 🙂

OSX Remote Desktop Client Fails With ‘404 not found’

Friday, July 18th, 2014

I recently had a strange issue with the OSX version of the Remote Desktop Client connecting to an RDWeb server over the internet. The client kept reporting "The gateway failed to connect with message 404 not found". Digging into the logs under the About box I found the following:

[2014-Jul-18 11:23:53] RDP (0): ----- BEGIN ACTIVE CONNECTION -----
[2014-Jul-18 11:23:53] RDP (0): client version: 8.0.24875
[2014-Jul-18 11:23:53] RDP (0): Protocol state changed to: ProtocolConnectingNetwork(1)
[2014-Jul-18 11:23:53] RDP (0): correlation id: ec36d635-6a34-ba40-9e8a-76ac60330000
[2014-Jul-18 11:23:53] RDP (0): Resolved '' to 'x.x.x.x' using NameResolveMethod_DNS(1)
[2014-Jul-18 11:23:53] RDP (0): Resolved '' to 'x.x.x.x' using NameResolveMethod_DNS(1)
[2014-Jul-18 11:23:54] RDP (0): HTTP RPC_IN_DATA connection redirected from to
[2014-Jul-18 11:23:54] RDP (0): HTTP RPC_OUT_DATA connection redirected from to
[2014-Jul-18 11:23:54] RDP (0): Resolved '' to 'x.x.x.x' using NameResolveMethod_DNS(1)
[2014-Jul-18 11:23:54] RDP (0): Resolved '' to 'x.x.x.x' using NameResolveMethod_DNS(1)
[2014-Jul-18 11:23:54] RDP (0): Exception caught: Exception in file '../../librdp/private/httpendpoint.cpp' at line 217
User Message : The gateway failed to connect with the message: 404 Not Found
[2014-Jul-18 11:23:54] RDP (0): Exception caught: Exception in file '../../librdp/private/httpendpoint.cpp' at line 217
User Message : The gateway failed to connect with the message: 404 Not Found
[2014-Jul-18 11:23:54] RDP (0): Protocol state changed to: ProtocolDisconnecting(7)
[2014-Jul-18 11:23:54] RDP (0): Protocol state changed to: ProtocolDisconnected(8)
[2014-Jul-18 11:23:54] RDP (0): ------ END ACTIVE CONNECTION ------

Which was equally useless.

Some digging around on Google landed me on an MSDN blog post. Reading through the comments on page 2 (you know, sometimes there is useful stuff there!) someone mentioned the same error and that a redirection in IIS was to blame. Sure enough, I had set up a redirect from the Default Web Site to point at the RDWeb location to make things easier for the users. Disabling the redirect fixed the problem. I then did some additional testing: enabling the redirect and selecting "Only redirect requests to content in this directory (not subdirectories)" allowed me to retain the redirect and still let the OSX RDP client work.
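If you prefer to script the fix, that checkbox corresponds to the childOnly attribute of IIS's httpRedirect section. A hedged sketch using appcmd on the server (the /RDWeb destination here is a placeholder for your own redirect target):

```
rem Run in an elevated prompt on the IIS server.
rem childOnly=true is "Only redirect requests to content in this directory (not subdirectories)".
%windir%\system32\inetsrv\appcmd.exe set config "Default Web Site" -section:system.webServer/httpRedirect /enabled:"true" /childOnly:"true" /destination:"/RDWeb"
```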

IIS Redirect Settings



Dell VRTX with Nvidia Quadro K2000 and RemoteFX

Wednesday, April 2nd, 2014

I recently had to set up a HyperV environment for VDI for a client which had to support RemoteFX, and I really struggled to get clarity on supported graphics cards. One of the first things I discovered was that the list of certified graphics cards for Windows Server 2012 R2 is very short and the list of recommended RemoteFX cards is even shorter. AMD is listed, but they only have drivers up to Windows 2008 R2. AMD tech support confirmed this in February 2014: "The driver for Server 2012 R2 is not available at this time". The Windows Server Catalog confirms this, yet you will find details around the internet showing Windows 2012 working with AMD, which adds to the confusion.

I didn't have the luxury of loan cards or a proven working configuration from our local suppliers, so faced with a limited set of cards I fell back on the requirements for RemoteFX, which are well documented, and then worked through what the suppliers did have in stock:

  • A SLAT-capable processor
  • A DirectX 11-capable GPU with a WDDM 1.2 compatible driver.

With this in hand I managed to find the Nvidia Quadro K2000 and K4000 cards, which a local supplier had in stock. They were not on the Windows Server Catalog, but the Nvidia site listed them as supported for Windows Server 2012 R2 (per the driver download details) and as supporting DirectX 11.

These were to be installed into a Dell VRTX, which allows you to map PCIe slots through to individual blades. One of the problems with adding a video card to a server is the auxiliary power requirement; fortunately the K2000 doesn't require additional power. I did subsequently discover that each of the full-height PCIe slots does have an auxiliary power connector, but it requires a Dell cable (Part No. CPL-X5DNV). Coincidentally, the factory-installed supported card from Dell is an AMD FirePro 7000. I am unsure if the power cable can be ordered separately.

For my initial testing I used Windows Server 2012 R2 and the 332.50 WHQL drivers, and thankfully the card was detected and supported for RemoteFX. I then tried the same thing on HyperV Server 2012 R2 and hit an interesting bug with installing the Nvidia drivers. Running setup, the installer starts, extracts the files and presents the EULA. When you accept the EULA it disappears into the background. Launching Task Manager I saw the installer still running, so I ended the process, opened a command prompt, navigated to the extracted Nvidia driver folder and ran: "setup -s -k". This does a silent install and reboots on completion. This did the trick and got the drivers installed correctly.

Here is a screen shot of my working Nvidia Quadro K2000 card in Windows Server 2012 R2.

Windows 2012 R2 with K2000

A nice step-by-step guide for installing HyperV with RemoteFX can be found here.

Edit: After pushing this into production and ramping up users, I quickly discovered that there is a huge demand on VRAM. A card with 2GB of VRAM could only service about 9-10 users. Of course, once this problem reared its ugly head a quick google revealed the VRAM requirements are based on screen resolution. So caution to prospective RemoteFX users. Another consideration I subsequently realised is that in a fail-over state the machines on the failed node that relied on RemoteFX would not be able to start if there was no spare VRAM headroom on other nodes in the cluster.
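From those numbers you can sketch a rough per-user budget: 2GB of VRAM serving roughly 10 users is on the order of 200MB each. Treat this as a sizing sanity check rather than a spec, since actual usage varies with resolution and monitor count:

```shell
# Back-of-envelope VRAM-per-user estimate from the observed 2GB / ~10 users.
vram_mb=2048; users=10
per_user=$(( vram_mb / users ))
echo "approx ${per_user}MB of VRAM per user"
```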

Edit: Updated part no. for the Dell cable. Thanks Greg!


Things you need to know about the Vantec NexStar MX and RAID

Wednesday, October 8th, 2008

When the Vantec NexStar MX (NST-400MX-S2) first came out I was quite excited. A dual drive housing: how great would that be for backups? Then I discovered that it did not have RAID 1 support. I then looked at some of the entry level NAS devices, but they were more than I needed and cost quite a bit more. Fortunately, not too long after the S2 they released a new version, the SR (NST-400MX-SR), which supported RAID 0 and RAID 1. I sent Vantec USA a few technical questions, specifically around RAID rebuild times and what happens in the event of a disk failing. They were pretty quick to respond: the disk light would go off in the event of a disk failing and the rebuild time was about 1GB/m. I also asked if the drive would rebuild without being attached to a machine and they said it would.
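At the quoted rate of roughly 1GB per minute, you can estimate how long a rebuild should take; a sketch for the 250GB disks used below:

```shell
# Estimated rebuild time at ~1GB/minute for a 250GB member disk.
size_gb=250; rate_gb_per_min=1
mins=$(( size_gb / rate_gb_per_min ))
echo "~${mins} minutes (about $(( mins / 60 ))h$(( mins % 60 ))m)"
```

So a little over four hours per rebuild, which matches the "leave it attached and wait" advice later in this post.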

So I recently purchased the SR version and proceeded to set up 2 x 250GB SATA disks in RAID 1 (mirroring). I initially flipped through the instructions and then installed the drives in the housing. Everything was easy to install, the manual was reasonably clear and I set the jumpers to RAID 1.

When I first attached the housing to my machine and started up disk management in Windows I was presented with 2 drives. I suspected I had not set the jumpers correctly but proceeded to create partitions and format the disks anyway to test it out. I then shut the housing down, opened it up and checked the jumpers and found that I had in fact not set them correctly, so I set them and pressed the little reset switch as per the documentation, but was still presented with two drives in Windows. After going through all the jumper options I could still only get two single drives. I even went as far as wiping the disks with the Seagate tools and starting again, no joy.

I mailed Vantec twice, but their speedy support was now something of the past and I received no replies whatsoever. I then sent the housing back to the supplier and asked them to make it work. Two days later the housing was back and declared working. So I started it up and sure enough it was now in RAID 1. I then queried the technician who had worked on it, and the trick was to press the reset button while the unit was powered on! Something that was not stated in the documentation (they suggest you press it before installing the drives). So I tested it for myself and sure enough it worked. It was actually quite cool starting up disk manager, setting the jumpers, pressing the reset switch and watching the disk config change suddenly. Obviously doing this destroys any data you had on the drives!

So the next thing to do was to simulate a drive failure and the subsequent rebuild with RAID 1. What you will notice immediately when copying data to the housing is that both drive lights burn solid; once the copy is complete they turn off. I then shut the housing down, removed one of the drives and started it back up again. Contrary to what Vantec support claimed, the drive light did not go off; instead it would flash on/off. Thus I would suggest that each time you start your housing up you let it flash through the initiation sequence and wait for both lights to stop flashing. If one continues to flash on/off at steady intervals, you have a failed drive. If you copy data to the housing while a disk is in a failed state, the good drive will burn solid and the failed disk will continue to flash. I then shut the housing down, re-attached the drive and started it up again. It probably took about a minute or two before the rebuild started. Both lights will burn solid, similar to when copying data. I did also test ejecting the housing while a rebuild was in progress (I used USB) and as soon as I did that the lights stopped flashing, so I had no idea if the rebuild was still running or not. I suspect that once again Vantec were incorrect and it's best you leave the drive attached while the rebuild runs. I would also advise against copying data to the housing during the rebuild as this will only slow the rebuild down and potentially cause other unforeseen problems.

After all that struggling I am however very happy with the unit. The only minus (and most of the reviews mention this) is that the fan is quite loud. I don't leave the housing on for long (it's just there for backups) so I don't have to bear it for long 🙂

I hope this post helps someone else in the future!


After some further digging on the internet I found some posts on Newegg suggesting that Silicon Image SteelVine Manager is able to show you the RAID status of the NexStar MX. To find this go to:

Silicon Image Support Page

  • Select SiI5723 Storage Processor
  • Then Configuration Manager
  • Then your desired OS (I selected Vista)
  • I then downloaded SiI57xx SteelVine Manager for Windows 5.1.24B

For convenience a direct link can be found here

Edit: Users of the NST 360 MX with 2TB drives should read the great comments from JP

The Great Digital Print Dilemma

Friday, March 16th, 2007

I decided it was time to start printing some of my pictures. So I went to Canal Walk and gave Kodak and Photo Connection the same 15 photos to print as a test. The results were rather surprising. The prints from Kodak were really dark, something which is probably fixable. The pictures from Photo Connection, in stark contrast, were crisp and bright so it was not difficult to pick the winner.

One thing that I did however notice was that there was a serious amount of cropping and resizing going on. Kodak had cropped the top and bottom and Photo Connection had resized them to fit leaving large white borders on the sides. Both of these options did not really thrill me and what thrilled me even less was the thought of having to go and crop/resize all my pictures manually for an optimal fit.

So I went digging to find out what the deal was. The "standard" print size nowadays is jumbo, which is 15 x 10cm. If you divide 10 into 15 you get a ratio of 1.5. I then took the dimensions of my digital images, which at their largest are 2816 x 2112; divided, that gives a ratio of 1.3333. "Aha!" I thought, "this explains it all, but how does one solve this?" So I started looking at simply changing the dimensions of my pictures to fit, but that looked terrible. I then started looking around for a bulk cropping tool, but I was not very enthusiastic about it. I then happened to notice an additional size option listed on the Kodak envelope: 18 x 13cm. This gave me a ratio of 1.3846, which was almost perfect. I then called Photo Connection to find out if they could print this size; I was informed they did not, but there was a digital print size called 6DSC which is 15 x 11.25cm, and when I crunched the numbers it worked out to a ratio of 1.3333. Hooray!

So it was off to Photo Connection with about 500 pictures. The following day I received a call to find out if I was sure I wanted all the pictures printed (I did). After all, they had a special on and at R1.39 a print it was a great deal. Oh no, that only applied to the jumbo size; this was going to set me back R2.90 a print (for a mere 1.25cm extra). I did however manage to talk them down to R2.40 a print, so I went ahead with the order. I was not disappointed. They did an excellent job, and while it cost me a lot it was worth it not having to look at cats with missing ears and people with no foreheads. I also bought photo albums specially for this size, which set me back R100 each but hold 300 photos.

So photography is working out to be an expensive, but enjoyable hobby.

Something else that Leon showed me is the ability to compile your own photo book. You can download free software from My Photo Book to compile your book, then hand in a CD at any participating store and they produce a bound book with the cover of your choice. Pretty nifty!

HDTV (Highly Disappointing Television)

Sunday, November 12th, 2006

I went to the Sony South Africa promotion at the V&A Waterfront on Saturday. I have been keen to see what the hype is about High Definition (HD) technology, and I like Sony's products. Unfortunately it seems that either I was expecting too much, or the hype around HD and Blu-ray is just that: hype. There were a number of large LCD HD TVs on display, some even displaying teasers driven by Blu-ray. Standing 1m from any of these TVs you could easily spot graininess in background objects like cliffs. Even at a more realistic 4-5m viewing distance you could still see blocks and grain in a lot of the scenes. Yes, this is still way better than what we were watching 5 years ago, but it is not really what I was expecting at all. I was sold on the concept of clear, crisp, immaculate imagery.

I got a chance to pick up their new DSLR-A100. I couldn't really play with it so I can't really comment on it, except on the weight and feel. The display model with a flash attachment on it seems to weigh about the same as my boss's Nikon D70, which in my opinion is rather heavy. It does however have a nice solid feel to it. The Canon 350D is still my choice in SLR, and the new 400D is looking very hot! What I did think was very cool are the new HDD Handycams with built-in hard drives. Finally! I don't know who would bother with the Mini-DVD cameras when they can only hold 30min of footage on a single-sided disc. For the console junkies they also had PlayStation 3s on display, but you could not actually play any games on them (what, may I ask, was the point then?). Sadly none of the phones were on display, so I could not get my hands on a P990, which is top of my list of upgrade options in the new year.