Monday, August 13, 2012

Cycling from SF to Yosemite

OK, my oldest said I was mad, and she's right. But last week I ended up cycling for 2 days from SF up to Yosemite. It was one of those things that seemed doable, knowing what it's like to ride longer distances, but still daunting given the lack of ride support (disclaimer: I've done two double centuries, but this was way different).

The hardest parts of the trip were the climbing (no kidding) and the heat (it turned out to be over 100 on both days). As my friend texted me during the ordeal: loads of water, electrolytes, food.

The heat and hills combined to double-team my sweat glands--later in the day you could almost watch the sweat pouring down my arms.

Monday, June 4, 2012

Getting my VM to talk to the office

Admittedly, without putting too much thought into it, I expected to be able to configure a VM on my home machine to connect to a work computer through an already established VPN tunnel without much hassle.

Well, I ran into a couple of bumps getting this system up and running. To be specific, I needed sessions to be initiated from both directions (from work to the local VM, and from the local VM to work).

So, basically, the configuration looked like this: There's a VPN tunnel established between work and my home computer. On the home computer there's a VM that needs to reach a system at work, and the system at work needs to reach the VM at home.

Or similar to the illustration below:

[diagram: system at work <-> VPN tunnel <-> home computer <-> VM]

So, I guess why I found this interesting is that it wasn't as simple as I initially thought. My first stab was to set up the VM with its network bridged to the host system's interface. But the bridged VM ended up getting the DHCP-advertised default route from my home router, so packets would not travel through the VPN tunnel. The routing table on the VM looked like this (where 10.0.1.1 is the IP of my home router):

root@debian:~# ip route
10.0.1.0/24 dev eth0  proto kernel  scope link  src 10.0.1.48 
default via 10.0.1.1 dev eth0 
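
One way to steer the VM's work-bound traffic into the tunnel is to give the VM a more specific route pointing at the host (which terminates the VPN) and let the host forward between its bridge and the tunnel interface. A sketch only, with made-up addresses: 192.168.100.0/24 standing in for the work network and 10.0.1.10 for the host machine's LAN IP:

```shell
# On the VM: send traffic for the work subnet to the host
# instead of the home router's default route
ip route add 192.168.100.0/24 via 10.0.1.10 dev eth0

# On the host: allow packets to be forwarded from the
# bridged interface into the VPN tunnel interface
sysctl -w net.ipv4.ip_forward=1
```

The default route stays pointed at the home router, so normal internet traffic from the VM is unaffected; only the work subnet takes the detour through the host.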


Tuesday, September 13, 2011

A little perl nugget: differences between two arrays

Something small, simple...

I have two Perl arrays and I want to remove all the elements the two have in common, leaving only the elements in @list_one that are unique to @list_one.


For example, pre-populate two arrays with the following values:
my @list_one = ();
push(@list_one,'a');
push(@list_one,'b');
push(@list_one,'c');

my @list_two = ();
push(@list_two,'b');
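
The nugget itself: index @list_two in a hash, then grep @list_one against it. With the values above, @unique ends up as ('a', 'c'):

```perl
use strict;
use warnings;

my @list_one = ('a', 'b', 'c');
my @list_two = ('b');

# Build a lookup hash of everything in @list_two
my %seen = map { $_ => 1 } @list_two;

# Keep only the elements of @list_one not present in @list_two
my @unique = grep { !$seen{$_} } @list_one;

print "@unique\n";   # prints "a c"
```

The hash makes each membership test O(1), so this stays fast even for large arrays, where a nested grep-inside-grep would be quadratic.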


Tuesday, September 6, 2011

Setting up Perf for performance evaluation of your code

OK--some quick notes on building and running the Linux performance profiling tool (perf) from the kernel source tree:

sudo apt-get install libelf-dev binutils-dev

wget http://www.kernel.org/pub/linux/kernel/v3.0/linux-3.0.4.tar.gz

tar xvfz linux-3.0.4.tar.gz

cd linux-3.0.4/tools/perf/

make

scp /usr/lib/libelf.so.1 [to-target-system]
scp /usr/lib/libbfd-2.20.1-system.20100303.so [to-target-system]


# sudo /opt/bin/perf 

usage: perf [--version] [--help] COMMAND [ARGS]

The most commonly used perf commands are:
annotate        Read perf.data (created by perf record) and display annotated code
archive         Create archive with object files with build-ids found in perf.data file
bench           General framework for benchmark suites
buildid-cache   Manage build-id cache.
buildid-list    List the buildids in a perf.data file
diff            Read two perf.data files and display the differential profile
evlist          List the event names in a perf.data file
inject          Filter to augment the events stream with additional information
kmem            Tool to trace/measure kernel memory(slab) properties
kvm             Tool to trace/measure kvm guest os
list            List all symbolic event types
lock            Analyze lock events
probe           Define new dynamic tracepoints
record          Run a command and record its profile into perf.data
report          Read perf.data (created by perf record) and display the profile
sched           Tool to trace/measure scheduler properties (latencies)
script          Read perf.data (created by perf record) and display trace output
stat            Run a command and gather performance counter statistics
test            Runs sanity tests.
timechart       Tool to visualize total system behavior during a workload
top             System profiling tool.

See 'perf help COMMAND' for more information on a specific command.



To run (-C specifies which CPU to profile, and -p specifies which process to attach to):

sudo /opt/bin/perf record  -p 4040 -C 5
sudo /opt/bin/perf report -i perf.data
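
For a quick sanity check without recording a full profile, perf stat just counts events while attached to a process (the PID 4040 below is a stand-in, as above):

```shell
# Count hardware events for process 4040 over a 10-second window;
# prints cycles, instructions, branch misses, etc. on exit
sudo /opt/bin/perf stat -p 4040 -- sleep 10
```

This is a handy first step before committing to perf record, since it tells you at a glance whether the process is CPU-bound at all.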


Monday, August 29, 2011

AT&T UVerse fiasco

I hate to complain.

But this is just one I have to post about.

I made the decision to upgrade from AT&T's DSL service to UVerse--to get a better data rate at a lower price. A reasonable decision--made unreasonable by AT&T.

First, the UVerse modem was delivered one day after they cut off the old DSL service. OK--so that's a day without service. You'd think they could coordinate this a bit better, or just cut off the old service a day later. But as I found out, this just set the stage for further incompetence on AT&T's part.

The next day, with the equipment wired up and showing some signs of life, UVerse still didn't come up. It turned out the order was written incorrectly: it should have included support for the POTS line, but didn't. I'm guessing some tech just needs to unplug a cable from socket A and plug it into socket B, but this addendum order takes another 24 hours to clear.

Thursday, July 28, 2011

HP-15c is on its way

Available in October according to one reseller.

More details can be found here:

Sounds like it will be limited to 10k. Hopefully the build quality is as good as the original (I have my doubts).

Enjoy.

Thursday, June 30, 2011

Something to crow about: Gson Json code generator

Gee whiz--this is a real time saver. Gson is a Google project to serialize and deserialize JSON data. Good enough. The problem is that to use it you really want to drop the deserialized output into a Java object for further processing.

There are some generalized object containers out there, but they didn't really do what I was looking for. I guess what I wanted was something light and easy--no go.

So, here's the whizbang part. This site takes your JSON, such as: