MS Navision 2009 R2 Web Services and Python… Wrapping Open Source around the Beast

24 04 2015

Just a quick code snippet today. I was building some backend processes and needed to interface with our MS Navision ERP system to get and set values on finished goods in our inventory as they came off the manufacturing line.

First, some housekeeping is in order. You need to have your Navision web service configured to support NTLM. I won't go into that here, but you can find some great walkthroughs on Microsoft's TechNet and elsewhere on the internet. Also, make sure you create a non-privileged Windows account for sending your SOAP requests to the Navision web service, since its credentials will be stored in your scripts. When I promote this to production, I will probably add a TLS layer on the web service as well.

import suds
from suds.client import Client
from suds.transport.https import WindowsHttpAuthenticated

# Define our SUDS client for the Nav WS SOAP API
ntlm = WindowsHttpAuthenticated(username='\\', password='hahahayouthoughtIdGiveYoumyPasswdLOL')
url = 'http://host:7047/DynamicsNAV/WS/companydb/Page/'
client = Client(url, transport=ntlm)
print(client)  # this will return the methods and types available from the page

In my case, I had a “Read” method to get information regarding a product.

def read_product(srlno):
    # Return the Nav record for a given serial number, or None on failure
    try:
        return client.service.Read(' ', ' ', srlno)
    except suds.WebFault, e:
        print e
    except AttributeError, e:
        print e

It seems that SUDS has an issue with attributes of type 'NoneType'. I've written about it here.
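Setting values back works much the same way. Here's a minimal sketch, assuming your page also exposes an Update method that takes the modified record (check the print(client) output for the exact signature; the Lot_No field below is purely hypothetical):

# Hedged sketch: read the record, change a field, write it back.
# Field names here are made up -- use whatever your page actually exposes.
rec = client.service.Read(' ', ' ', srlno)
rec.Lot_No = 'LOT-1234'     # hypothetical field on the page
client.service.Update(rec)  # Update takes the modified record object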

This should be enough to get you on your way to integrating external systems with MS Navision 2009. It's a great way to get away from direct SQL access and move toward an n-tier architecture with APIs. Unfortunately, the Navision 2009 web service is SOAP-only. I guess it could be worse.

RTFM – AWS's SDKs… Why didn't you say so in the first place

14 12 2014

On the heels of my recent post about taking advantage of S3, I was debugging some issues in the processes I had created and investigating how I would build tools for the end users who needed to manage this data.

I wanted the tools for the end users to be web based and part of an on-premise intranet suite. The challenge was that the end users were manipulating the local data, but my S3 data was not being kept in sync. This is just bad. So I needed to corral the end users with a suite of tools so they couldn't make a mess of my nice and neatly organized data.

Along came a search result I'm sure I had seen previously but completely glossed over, and I must say that was a HUGE fail on my part. I discovered Amazon's SDKs for PHP and Python. Immediately the scene from Pulp Fiction where Vincent Vega opens the suitcase, golden light shines upon his face, and he declares "We happy!" came to mind. This led to a near all-nighter while I refactored and tested my existing Python processes to take advantage of the boto library. This allowed me to eliminate the subprocess calls, which I believe were the source of some of my bugs.
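For a taste of why this made me so happy, here's roughly what the refactor looked like. This is just a minimal boto sketch; the bucket and key names are made up, and credentials are assumed to live in your boto config or environment variables:

import boto
from boto.s3.key import Key

# Credentials come from ~/.boto or the AWS_ACCESS_KEY_ID /
# AWS_SECRET_ACCESS_KEY environment variables.
conn = boto.connect_s3()
bucket = conn.get_bucket('example-backup-bucket')  # hypothetical bucket name

# Upload a local file straight to S3 -- no subprocess calls required.
key = Key(bucket, 'archives/2014-12/data.tar.gz')
key.set_contents_from_filename('/srv/data/data.tar.gz')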

The following day, without hesitation, I grabbed the PHP class and quickly got to work building some very rudimentary yet functional web-based tools for the end users.

Shortly after, I was sharing this experience with a colleague of mine, Jack Jones (founder of Collabinate), about how I had failed to RTFM, and it turned out he was going through a similar experience with the Salesforce API. We both had a good laugh.

Taking Advantage of S3 – Hybrid backup solutions

14 12 2014

I recently found that approximately two-thirds of my offsite tape backup process was being consumed by data that the business also needed to be able to share with partners on an as-needed basis. It was painful to watch some coworkers FTPing this data over our very limited internet connection from the office.

After doing some analysis of the near-TB of data I needed to store off-site, I calculated I could keep it on AWS's S3 for a mere ~$20/month. Now you might say: just upgrade your internet connectivity. Our office is in an industrial park on the east end of the GTA, and you would think connectivity options were endless. Unfortunately, that is not the case. After speaking with our MSP and the major local telcos, I discovered you either stick with the ADSL you've got or jump to fiber. From a business perspective that is a HUGE jump in operating expense. For those not in the know, we're talking about ~$50/month vs. ~$800/month. That is an expense which is damn near impossible to sell to the keepers of the coffers.

So the ~$20/month operating expense that I could integrate into the existing processes looked pretty damn good. Not only did this give me an off-site DR location for data critical to one of our business units, it provides a location from which we can share that data quickly when business partners request it. Another added bonus is the ability to limit access to the shared data by setting expiration dates on the shared links.
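That last bit deserves an example. Here's a minimal sketch of an expiring share link using boto; the bucket and key names are made up, and credentials are assumed to be in your boto config:

import boto

# Credentials come from ~/.boto or AWS_* environment variables.
conn = boto.connect_s3()
bucket = conn.get_bucket('example-backup-bucket')  # hypothetical bucket
key = bucket.get_key('shared/partner-report.zip')  # hypothetical object

# Generate a pre-signed URL that stops working after 24 hours.
url = key.generate_url(expires_in=86400)  # lifetime in seconds
print url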

We already had some scheduled jobs that processed this data locally, so I took those as an example and expanded on them to push this data to S3. The existing scheduled jobs were some archaic VBS scripts. Since I've been on a bit of a Python kick lately, I ported these over and moved them off to a Linux box.
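On the Linux box the scheduling side is just cron. Something like the following crontab entry; the script path and schedule here are purely illustrative:

# Process the data locally and push it to S3 every night at 01:30
30 1 * * * /usr/bin/python /opt/scripts/process_and_push.py >> /var/log/s3push.log 2>&1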

I found a handy little tool which served my needs perfectly. So now I have automated data processing which takes care of business locally, immediately duplicates it off-site, and frees up a large chunk of my existing tape storage.

Next is to build tools for the stakeholders to manage this data locally more efficiently. More on this later.

Latest favourite linux distro

14 12 2014

So, for those of you who know me, I'm pretty fickle when it comes to Linux distros, although for the past couple of years I've been partial to Debian-based distros for desktop environments.

For years I had been running Ubuntu desktop, until the switch to Unity. For some reason Unity just didn't excite me, so I made the switch to Mint+Xfce and enjoyed the efficiency and cleanliness of the X environment. However, I've spent more time on laptops over the past couple of years, and with that I have found that ergonomically I'm more efficient at my work when I don't have to reach for the mouse or switch to the touchpad.

My needs were answered when a colleague turned me onto CrunchBang, yet another Debian-based distro, packaged with the Openbox window manager. Let me just say: if you're into DevOps in a *nix environment, you will love how the Openbox interface deeply satisfies your inner typist.

CrunchBang reminds me of the old Enlightenment+GNOME days, where you could have a sexy X environment and almost never take your hands off the keyboard, with fantastic pre-built key bindings and an uber-easy way to edit or add your own. I will post more on Openbox and CrunchBang as I discover more of their greatness. Until then, head on over to the CrunchBang site and get yourself started.
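To give you a flavour of how easy the key bindings are: they live in ~/.config/openbox/rc.xml, and adding one is a few lines of XML. A hedged example (the key combo and command are my own picks, not CrunchBang defaults):

<!-- Bind Super+Return to launch a terminal -->
<keybind key="W-Return">
  <action name="Execute">
    <command>x-terminal-emulator</command>
  </action>
</keybind>

Then run openbox --reconfigure to pick up the change without logging out.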

Why did I wait so long to Group and Tab

6 05 2010

So, I upgraded to Ubuntu 10.04, aka Lucid Lynx, last night and I’m taking some time to poke around.

There are some obvious items to make note of. My boot times are really freakin' fast; many thanks go to the SSD optimizations, which make that upgrade (Corsair CMFSSD-64GBG2D) money well spent. Not only are boot times faster, application launching has a visibly improved snappiness. Anyway, let's move on to what I really wanted to talk about: grouped and tabbed windows.

I've seen this feature in the Compiz Config Settings Manager ever since I moved to Ubuntu as my main OS nearly three years ago. Now I've finally taken a stab at playing with it, and may I say... cool! Once I figure out how to record my desktop into a suitable video format for sharing, I'll post a demonstration. The basic idea is that you can select multiple windows of related applications, or windows containing related information, and group them like you would a bunch of objects in a Visio diagram. This way, when you select one window to move, they all move together. Each group also gets its own distinct colour "glow" so you know which windows belong to which group. Then there is tabbing: hit the tab key binding and all the windows slide together into one stack, and you can flip through the windows in the stack with the next/previous window bindings.

I'm going to play with this functionality some more at work and see how well it fits into my daily routine, since I'm usually managing multiple terminals, or terminals and related windows. I'll keep you posted on how much adjustment to my workflow is needed and whether this improves or hinders my efficiency.

Until next time… Bazinga!

Controlling your Wifi interface with some Command Line Kungfu

26 03 2010

Here's a little bash-fullness for y'all. This will point you in the right direction; I'll leave the wpa_supplicant config learning up to you. It's not all that hard… honest :)



iface=wlan0                                      # wireless interface to manage
config=/etc/wpa_supplicant/wpa_supplicant.conf   # path to your wpa_supplicant config

ifconfig $iface up                       # bring the interface up
wpa_supplicant -B -i $iface -c $config   # start wpa_supplicant in the background
dhclient $iface                          # grab a DHCP lease
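And when you want to drop off the network again, the reverse is just as short (same $iface assumption as above):

dhclient -r $iface       # release the DHCP lease
killall wpa_supplicant   # stop the supplicant
ifconfig $iface down     # take the interface down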

Migrating from ACID-BASE to Snorby on Ubuntu 9.04 amd64 (Jaunty)

16 09 2009

**NOTE: I set this up with all the gems installed locally to my user directory. If you want to install them globally, I think (though I haven't tried it) you can use sudo when doing gem install.

First thing we need is git (on Jaunty the package is git-core; the package named "git" was still GNU Interactive Tools back then):

sudo apt-get install git-core

Then all the ruby on rails business:

sudo apt-get install ruby
sudo apt-get install ruby1.8-dev
sudo apt-get install rake
sudo apt-get install rubygems
sudo apt-get install rails

Add my local Ruby gem bin path to PATH; you might want to add this to your .profile:

export PATH="$HOME/.gem/ruby/1.8/bin:$PATH"

Continue prepping our dependencies

gem install rake
gem install rails
gem install prawn
gem install mysql

We should now have all the necessary dependencies in place. Before we jump to the setup script, we need to prepare ourselves so that it doesn't complain about not being able to drop the existing snort tables. If you followed the howtos for barnyard2/ACID-BASE, you probably have a snort database/schema in MySQL. Keep it, just in case. Set up a new db/schema using your favourite method; I was in a hurry, so I cheated with the MySQL Administrator desktop app. For the graphically impaired, let's do this:

mysqladmin create snorby
mysql -u root
mysql> CREATE USER 'someuser'@'localhost' IDENTIFIED BY 'some_pass';
mysql> GRANT ALL PRIVILEGES ON snorby.* TO 'someuser'@'localhost';
mysql> FLUSH PRIVILEGES;
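A quick sanity check that the grant took; this should drop you at a mysql prompt on the snorby schema after asking for some_pass:

mysql -u someuser -p snorby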

Now let's go ahead and get the code :)
We'll work straight out of the home directory. (I originally had Apache user_dirs enabled and kept everything in public_html; see the note below for why that was a bad idea.)

hoppers99-work from #snorby on Freenode made a good point regarding my use of ~/public_html: it's actually a bad idea, since it would expose the database config as plain text. So disregard the notion that we're using Apache to serve up Snorby. I've edited this post to reflect that we're working out of /home/<username>, NOT /home/<username>/public_html.

cd ~/
git clone git://

Next we jump back to Mephux's install procedure.
Edit the database parameters to match the user and password created above:

cd ~/Snorby
vi config/database.yml.example


adapter: mysql
database: name_of_snort_database_here
username: my_user
password: my_password
host: localhost

Remember to :w config/database.yml while inside vi


vi config/environment.rb

Now it’s time to run the setup script:
rake snorby:setup RAILS_ENV=production

Now we must jump to our barnyard2 config and point barnyard2 at the new database. As I write this, I'm wondering whether you can tell barnyard2 to write to two different databases in parallel; I'll have to confirm that in #snort, and in the meantime I'll assume no, to be safe. Change the following parameter in /etc/snort/barnyard.conf:

output database: alert, mysql, user=snorby password=<some_pass> dbname=snorby host=localhost

Restart barnyard2 or give it a SIGHUP
sudo kill -HUP `cat /var/run/`

Back to our Snorby install. It's time to "Fire it UP!" Remember, if you already have Apache running, you need to pick a listening port other than 80. Start somewhere above 48619 (at least that's the range Red Hat/CentOS like you to use for "user services").

ruby script/server -e production -b -p 48620 -d
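Before reaching for a browser, a quick check that the server actually came up (assuming the same port as above):

curl -I http://localhost:48620/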

Next fire up your web browser and point it to:


Log in as Snorby/admin.

