Tuesday, December 23, 2008

Work fun and Trusted SSL, aka Quis custodiet ipsos custodes...

Some very 'fun' stuff going on these days. So at my current gig they had previously banned all external email access and instant messenger clients. No big deal for me because I can IM/E-Mail on my phone. The Websense proxy also blocks suspicious and 'against policy' websites. It's a security policy thing more than an HR thing. The client, when I was an employee, had a rash of virus outbreaks. And the 'core server network' was unprotected from the general population and the remote sites were unprotected from each other. It's pretty common, in my experience, for companies to work this way.



A week or so ago, they opened Websense up to specific external email sites. The rationale was sound: Hotmail, Yahoo, GMail, they all have built-in AV tech now, so it's relatively safe, and anything that gets by them is going to get by our Ironport mail gateways anyway (Ironport rocks, by the way... if you want an email filtering solution, I'd recommend them). Well, this week they've had another virus outbreak inside the perimeter. So they loosen the reins and get burned... It's been a debacle tracking it down. Not sure if it's a virus/worm/trojan; I'm on the outside, and the SRT tech-bridge is still ongoing. All of this while people are already on vacation and staffing levels are low to begin with.



Anyway, on to the other topic. Here's an interesting observation by one of the guys at StartCom. The Mozilla folks had a bug submitted because Mozilla was complaining that all the sites a user visited had bad SSL certs. The helpful folks at Bugzilla dug a bit and found out the bug-reporter was getting man-in-the-middle attacked... over SSL... So it really wasn't a bug; Mozilla/Firefox was correctly saying things were fishy. Well, the blogger from StartCom (can't figure out what his/her name is) found out that some of the 'trusted SSL providers' are not to be trusted. One of Comodo's resellers issued him a mozilla.com certificate without asking any questions about whether he was legit or not (he's not). So now he could set up a MitM attack and not set off the SSL cert error alarm. The cert wouldn't be the official one, but the traffic would still be encrypted. So it would look secure, but it would be 'locked' with a different lock, a lock that your browser trusts. Because browsers ship with a list of trusted providers, any cert generated by one of those providers is assumed to be legit. The browsers (and by proxy Mozilla, Microsoft and Apple) trust that the cert providers on their 'approved list' are verifying, in some fashion, the people they hand out certificates to. Who watches the watchmen? With this breach in the web of trust, all trust becomes suspect. How do you really know without verifying the trust on the other end of the SSL connection yourself? How on earth would you ask someone at Bank of America if this SSL certificate was the real certificate? And the web of trust was supposed to protect me from this.
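If you're paranoid enough to want to check things yourself, you can at least eyeball a site's certificate by hand. A rough sketch with OpenSSL from any terminal (swap in whatever host you care about):

# show who issued the cert, who it claims to be for, and its fingerprint
openssl s_client -connect www.bankofamerica.com:443 < /dev/null 2> /dev/null | openssl x509 -noout -issuer -subject -fingerprint

It doesn't solve the who-do-you-trust problem, but comparing that fingerprint against one you got out-of-band beats blindly trusting the padlock icon.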



Anyway, it brings to mind the cyber-crime of the century. In the summer you start infecting machines, inserting your proxy for amazon.com, and then cleaning up the traces of the infection. So you clean your tracks and the person is none the wiser. You just sit and wait. Wait until the busy holiday shopping season. Then you quietly intercept the credit card numbers, expiration dates and security codes. And still you wait. Then slowly, you clone that information onto new cards. And then you go on shopping spree after shopping spree. You also take out your list of enemies and send them a plasma TV or two, to their real address with their real name. You do it slowly and cautiously so they never put it together that all the cards in common came from Amazon between the summer and xmas. Or perhaps instead of Amazon, you take advantage of the heavily consolidated American banking industry and siphon the money right out of their accounts. All of the pieces are there. Laundering the money would probably be your toughest hurdle, and even that's not too hard. Scary stuff.










More fun with Time Machine....

So I'm finding a few oddball things that the TM restore caused me. The first one is the most annoying. I have an AppleTV set to sync with my iMac, with several GB of purchased music, movies and TV shows synced from the iMac to the ATV. Well, when you restore from Time Machine, the iTunes authorization is lost. And iTunes doesn't ask/warn you about that until you try to play a protected track. So I launch iTunes, it dutifully syncs the ATV, but the computer isn't authorized, so it proceeds to remove those items from the ATV. AAAAAARG! It takes forever to sync that many files of that size to the ATV. And due to some flakiness in ATV syncing, movies rarely sync on the first try. (I'm not alone in this; it's fairly common on the support forum.) They sync eventually, but it takes several tries. So that was no fun.


A few games lost their registration too. More fun...


Listened to: God Rest Ye Merry Gentlemen/We Three Kings f/Sarah McLachlan from the album "Barenaked For The Holidays" by Barenaked Ladies


Tuesday, December 16, 2008

Time Machine part II

A few more little oddities with my Time Machine restore.


1. Mail needs to be completely re-imported. So the caches for Mail are not maintained in Time Machine. This is interesting but not surprising. The files that those caches are built from are maintained, and you can restore individual messages via the Time Machine interface. But I'm still waiting for 27,861 messages to 'import'. Somehow I suspect the 9-minutes-remaining estimate is a bit off.


2. Deep node traversal is problematic on the first backup. If you ever do a Time Machine backup and it takes forever to get past 'preparing to backup', then you've seen the deep node traversal thing. If you're geeky like me, you'll also notice these lines:


Dec 15 23:53:39 rwhiffen /System/Library/CoreServices/backupd[340]: Node requires deep traversal:/ reason:kFSEDBEventFlagMustScanSubDirs|kFSEDBEventFlagReasonEventDBUntrustable|


in your /var/log/system.log (you'll need sudo to read it from Terminal, or use Console.app).
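A quick one-liner to pull just the backupd chatter out of the log (adjust the tail count to taste):

sudo grep backupd /var/log/system.log | tail -20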


I also noticed this gem:


Dec 15 23:53:39 rwhiffen /System/Library/CoreServices/backupd[340]: Event store UUIDs don't match for volume: Macintosh HD


Uh oh... don't like the sound of that. Little Snitch also complained that its rules database checksums didn't match. I guess this is understandable, but it makes me wonder what else isn't the same.



Listened to: Sleigh Ride from the album "A Very Special Christmas 2" by Debbie Gibson









Twitter add-on for Ecto

So I write my blog posts in Ecto, a handy front end for just about any blog software out there. It comes with a few plug-ins, like an iTunes plugin that inserts a link to your current iTunes track:

Listened to: Margarittaville from the album "Beaches" by Jimmy Buffett

To let people know what you're listening to. Good thing it's not automatic or people might find out about my extensive Kelly Clarkson collection and bad '80s pop music.

Anyway, there's a Twitter add-on that is supposed to post a status update to Twitter (and then Facebook via linking... Viva La Web Services!) and I'm trying to get it to work. So far I've published 2 posts and no dice. Maybe the 3rd time is the charm.

Ohhh Next up on the rando-play:

Listened to: Seven Little Indians from the album "Stolen Moments" by John Hiatt

 

Awesome song (thanks Heather).

Restoring my iMac via Time Machine...

So my iMac has been a tad flaky lately. It's not something I can easily pinpoint; something just isn't right. More occurrences of the spinning rainbow wheel, simple actions causing the Finder to stop responding, stuff like that. The kind of stuff that makes pseudo-technical people say things like "it must be spyware" or "after a while when you get too many files the machine bogs down" or "the machine's old". My favorite is the standard Windows refrain, "defragment your C drive", which can be valid but more often than not isn't the real problem. I thought I was on to something when I noticed an app that would go in and out of "not responding" in Activity Monitor, but that turned into a dead end. So I decided to look at my disk utilization, and my drive was fairly fragmented. More importantly, I had data from the top to the bottom of the drive. It would have been interesting to see if the pagefile was split between two far-flung tracks. So I decided to run Drive Genius and defragment the drive overnight. I boot up the DVD and it proceeds to do a health check and reports an error (which I don't have in front of me right now). Interestingly, Disk Utility reported no such error when booted from the disk. But for whatever reason, this drive had something wrong with the filesystem. No problem, I have recent Time Machine backups; I'll just reformat and restore.
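(Side note: if you'd rather check the filesystem from Terminal than boot a DVD, diskutil can do a live verify. A sketch, assuming the default volume name:

diskutil verifyVolume "/Volumes/Macintosh HD"

Repairs to the boot volume still require booting from other media, though.)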


Oh, if only it were that simple for me. It turns out that with my vintage of iMac (it shipped with 10.4), the Leopard (10.5) DVD, a blank internal disk drive, and a valid Time Machine backup, a restore isn't very simple. The mistake I made was going into Disk Utility and erasing "Macintosh HD", since Time Machine was going to do it anyway. But if the drive had died and I'd put in a new one to replace it, it seems I would have hit this issue too. For reasons that aren't clear yet (AKA Google didn't have an answer on the first page of hits), Time Machine won't restore to an unformatted drive, even though it formats the drive during the restore (I came to find that out later). Had I known it was going to reformat the drive on its own, I wouldn't have done it myself. Anyway, I had to install 10.5 on the drive first (45 minutes), reboot, boot the DVD again, and then do a restore. 4 hours later, my 120GB of data is back on the internal drive.


Then the rest of the fun. Time Machine backs up every file that changes, except for files that the OS can rebuild on its own, like app caches and such. If you download a lot of crap, like Linux ISO files and other large files off the net (especially if they have .RAR and .PAR2 files), you will quickly fill your Time Machine drive with unnecessary junk. Virtual machine disk images are another great example of these extra backups; they change all the time. When I ran one of my VMs all day, I had 8 hourly backups of a 15GB file. Yikes! The solution is to add exclusions to your Time Machine config via the "Options" page of the preference pane. Then you handle backups of the troublesome directories manually. Which is great until you forget to handle them manually right before you reformat your drive. DOH! Fortunately it's nothing I can't download again, but on the other hand, it was a lot of bandwidth used. Then again, since I probably couldn't tell you what was in those two temp folders, it's probably a good indicator I didn't really need them.
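If you're curious what's being skipped, the preference pane writes the exclusions to a plist you can read from Terminal. A sketch; I believe the key is SkipPaths, but treat that name as an assumption:

# list the paths Time Machine is excluding
sudo defaults read /Library/Preferences/com.apple.TimeMachine SkipPaths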


Anyway, the restore is done and it looks like I'm back in operation. My return-to-service time: 4 hours 11 minutes. On the plus side, the dishwasher is loaded, the counters wiped down, and the laundry folded and ironed.



Listened to: Why Can't I Fall In Love from the album "Pump Up The Volume Original Soundtrack" by Ivan Neville

Looking to get more credentials....

So I've come to the conclusion (in a very round about way, with much flip-flopping involved) that I want to have more certifications and affiliations. I'm currently a Sun SCSA and SCNA. I am also a member of the Microsoft Partner program. Neither of those is going to differentiate me very much. So I'm pursuing other certifications and affiliations.



My thought is to get the base SNIA cert, and maybe the SCSE, as the foundation. Then add the EMC or HP one (depending on what I'm working with in my current engagement at the time). If it goes well and I stick to it, I'd also upgrade my Sun cert to Solaris 10, and possibly pick up the Sun security certification as well to put a security cert on my resume. Alternatively, the CISSP would be a great choice, but I think the study time would be too long.


And just to make things more complicated, Sun has a special offer for the Solaris 10 upgrade exam: for $99 you can take the upgrade test instead of the usual $200. Hmmm, life is never simple.



Saturday, December 6, 2008

Christmas Greetings from Jim Pugh

I received a Christmas card from Jim Pugh/Pastel Motif today.


jimpugh.jpg


While I suspect all purchasers of the CD are getting the card, I like to think mine was special :-)


Jim needs to put out a Christmas album or MP3 download...


Tuesday, December 2, 2008

Cozumel diving picture tease

Started uploading pictures from our trip to Cozumel for some diving. So far it's only a few individual dives and a big 'blob' of all the pictures in their 'RAW' form (no culling the bad ones, no cropping, etc). Anyway, you can take a peek here.


Updating my scuba pictures

I've decided to install another copy of Gallery2 to manage my pictures. This will make it easier for me to manage over the long term than my previous method of creating individual directories with HTML wrappers on it. Anyway, http://rich.whiffen.org/pictures is live. I've started uploading pictures now and will be putting more up over time. Then the hard part of going back and editing the descriptions and such. It never ends.


The Scuba section is here and the "Misc Pictures" is here. That's all there is so far.


Right now I have scuba pictures broken down by year, but it occurs to me that by location might be a better choice. So don't be surprised if it changes back and forth a few times over the next few weeks as I change my mind. I tend to do that it seems.


Thursday, November 20, 2008

Bluetooth phone project

So I've had this project idea for a while. I bought a 'retro handset' from Thinkgeek.com a while back. I also have a 'red phone' from a few jobs ago.




DSC04348.JPG


So my thought is to combine the two and make a Bluetooth desk phone. I've taken them both apart and have hit a few snags already.




DSC04351.JPG


First, I seem to have lost yet another soldering iron. Second, all the heft from the red handset comes from the magnets in the mic and speaker. Third, there's a physical button on the electronics that will need to be dealt with to operate the handset.




DSC04353.JPG


There are actually 3 buttons on the unit. I suspect the other two are volume up and volume down. (I also seem to have mixed up which one is the 'on' button but I have a 1 in 3 chance of getting that right with some trial and error).


DSC04354.JPG


I put a quarter in this shot so you can see how small the electronics are. I should be able to squeeze that into the base just about anywhere. I wish the red phone were part of a Dictaphone system, because then it would have a 'record' button in the handset that would be perfect. Oh well, maybe I can make the two 'off hook' buttons in the cradle work for me, or put a button in the middle of the rotary dial. I intend to put the USB port where the RJ-11 jack in the back of the base goes and use the existing handset cord to route the sound to the speaker and mic. This is all contingent on me being able to figure out how to get the button to work in the base. If not, I'll cram it all into the handset (like the original) and make a new button hole. It'll look ugly, but hey, I'm an amateur with unsteady hands.


More updates as they come...


Monday, November 17, 2008

So long Kevin and Karl

So, I've been delaying this post for a while now. But it's time to put the issue to rest for me personally. A few weeks ago (sometime towards the end of October) Robert Cray let Kevin Hayes and Karl Sevareid go from The Robert Cray Band. We had a lot of discussion about it on the Robert Cray Fan Club. That, by itself, was noteworthy; it's been a quiet year on the board from an RCB perspective. Anyway, I pretty much said my piece there, but want to reiterate it here.


It is a bitter pill to swallow. This lineup has been in place for over 16 years, eight studio albums, over 1000 live performances, not to mention the compilation albums. It may be over a private matter, but it's a public divorce and all diehard fans are going to be affected. So while we shouldn't publicly speculate about the private reasons why, it's fair for us to discuss and lament its effect. For me, for now, it's not going to be "The Robert Cray Band" for a while. They broke up. Now it's Robert Cray featuring Jim Pugh. I'm sure this feeling will pass, just like when the horns left, but for now I'm still saddened by the news. Karl and Kevin were more than just band members, they were contributors. They have songwriting credits across many albums. They worked so well together.


All of the best RCB shows I've seen have been with this foursome, sometimes with the horns, sometimes not. In the late 80's I caught the RCB in Minneapolis, back when it was Richard Cousins, Peter Boe and David Olson. I don't have the same enthralled memories of that show as I do of the shows I saw from 2000 onward. I didn't get to see too many RCB shows before moving to Washington, DC; just the MSP show and one appearance at the Bayfront Blues Fest in 1995 or 1996, I forget which year. I don't know if it's me that's changed or the band, but things just seemed 'better' from 2000 onward. It's probably just nostalgia sneaking up on me.


So now it's on with the Robert Cray group. Still can't call it "band" yet.


P.S. Kevin or Karl, if you're out there, drop me an email.... I'd love to know where you land next.


Friday, November 14, 2008

EMC and cloud storage

Well, this is more a collection of stuff for me to read later than a real blog post. EMC has announced their Atmos product. Atmos is the software that used to be called Maui. It is used to store data on their "Hulk" product, the high-density, low-cost storage array. So the tubes have been abuzz with info and questions about the product.


For example, Robin Harris has a link and some discussion about the theoretical underpinnings of the product, and the project on which the idea is based. Clearly the EMC product isn't exactly the "OceanStore" project, but it's fairly close.


My favorite storage blogger Chris M Evans wrote up some pretty good summaries and links to even more good posts on the topic. He has the same practical-matter questions that I have: how does it actually get done? How is it going to handle failures and do its real work? I'm sure EMC has reasonable methods to handle them, but the idea is different from how we do things today.


My #2 go-to, Beth Pariseau, has an equally compelling look and collection of links about the Atmos product as well. (Sorry Beth, you get paid to write so I have to view it with a skeptical 'payola' eye. It's not you, it's me. As such you'll have to settle for my #2 spot (as if it matters to anyone other than me...).)


I have the same basic questions a lot of others seem to have:



  • How's it going to work in the real world? How do I, in my point-to-point DS3 world, make this work? If the answer is 'bigger pipes', then they've just eliminated a huge customer base. The large-scale companies would have to buy it and then re-sell to smaller companies à la Amazon's S3. Well, that hasn't been stellar, and I don't have my data in hand; a company that could be engaging in the next version of credit default swaps does. How do I back this up? Do I need to? How does compliance work?


  • Haven't we heard this song and dance before? Storage as a service has been tried and was pretty much a disaster. People want to own it. They want their hands on it. I think people are willing to tolerate their 'in flight' data being in someone else's hands, but data at rest is another issue altogether. How do we convince CIOs and CTOs that this isn't a remix of a bad B-side single? They've heard SaaS. They've heard utility computing. They've heard grid computing (twice, no less). Now cloud storage is somehow different? Not feeling the 'gotta have it' need to run right out and get it.


  • Overhead much? Caching, N levels of replication, rich metadata, a single unified namespace... none of this is storage overhead. This is all CPU and network overhead. Of the three components, CPU, network and spindles, they pile on the two that are the hardest to incrementally grow? Where do this caching and metadata 'live'? Who/what maintains consistency without incurring murderous latency?


Anyway, I'm sure the EMC partisans will march out with their explanations. As the product actually ships to live customers who are using it for their core unstructured data store (not some stove-piped sub-group within Dell, for example), I'm sure these questions will all get answered. For now, I'm sitting on the sidelines trying to get my mind around what's real and what's marketing fluff.


The other big question I can't seem to answer is: what business problem is this solving? Reliability? Cost? Performance? I have a hard time believing this will help costs or performance, so that leaves reliability. I'm sure I'll have more thoughts as time permits me to read the glut of information flowing out these days.


Wednesday, October 22, 2008

MusicBrainz Picard

So my MP3 library's been a mess for a long time. I have a large folder full of MP3s called "Bad Tags". They're the tracks I don't listen to very often and that have tags that are messed up. In most cases I'm too lazy to get the CD out of the attic and re-rip it, and it would take forever to re-type them. My most common tag issue is truncated fields or the infamous "Track 01" song name from back in the days of using RealJukebox. Man, those were the days. Then SoundJam on the Mac. Nostalgia! Anyway, lots of bad albums.


Enter MusicBrainz Picard. It makes cleaning them up a breeze! If you have MP3 tags that are wrong and want an easy tool to clean them up with, MusicBrainz Picard is the best free tool I've ever used. I've used some of the 'fee'-based ones; they're good and all, but I don't like to pay for a tool I'm going to use once and never again. It's back-ended by the MusicBrainz metadata database, which is quite extensive.


Tuesday, October 14, 2008

Heroes... love the show

So I watch a lot of shows online (Hulu, iTunes, etc). I've been totally hooked on Heroes since season 1. Season 2 was iffy because of the writers' strike, but still pretty good. I love how they end an arc every season. But little things bug me from time to time. Here's an example from Episode 4 of Season 3, "I Am Become Death": Future Peter and Present Peter are walking down the street, and Peter says they can't stay because they're looking for him. They know how to become invisible and can stop time. Seriously. Stop time or go invisible. They can fly. Why run? Fly away. They have telekinesis and other 'fire' type abilities. They can phase like Niko's dad (forget the name). The list goes on. Peter is just too dumb one second, then clever about other things. It bugs me.


Anyway, just my rant.


Cheers,


Rich


Jim Pugh/Pastel Motif

Jim Pugh (Robert Cray Band Keyboard maestro) has come out with a solo project.


http://pastelmotif.com/


As Jim himself put it:



Hi everybody!



For all those who enjoy deep soul groove Hammond B3 instrumental music check out



JimPugh/pastelMotif at PastelMotif.com.



There's info on PastelMotif and you can also check out some of the tunes.



Cheers.



There's a lot of great info about Jim on the site as well. Check out the Bio Page in particular. He has a few samples from the project up on the website.


Monday, September 29, 2008

Yeah but when can I buy it?

So I came across a Slashdot article about new solar cells that set some efficiency records. Almost all of the initial comments say the same thing: yeah, but when can I buy it? That's the trouble with a lot of new technology announcements in today's world. People want it now. They don't want to know that a breakthrough has been made; they want to know how it's going to make their life better, products cheaper, or things faster.


This 'improvement' isn't going to help 'general' solar usage much at all from what I can tell. Yeah, it may be very efficient, making it great for space/satellite usage, but it uses gallium and indium, scarce and expensive (not to mention a little toxic) elements. The goal would be to get efficient solar cells that don't rely on scarce materials to work. Oh well, it's a step in the right direction.


Tuesday, September 2, 2008

A storage blog battle royale...

So there's been a bit of a dust-up going on over mid-tier storage. Chuck Hollis from EMC posted an interesting idea about evaluating "usable capacity" for arrays. He focused on an Exchange use case, comparing the recently released CX4, NetApp's FAS series, and HP's EVA, three very competitive models. He then proceeds to say why he thinks the CX4 is a better value because it achieves a 70% storage capacity efficiency; that is, 70% of the raw capacity is usable. He then does the same thing for the other two, giving the HP EVA 47.2% and NetApp 33.61%. There's a pretty good set of links associated with it to show he's not making these numbers up. Problem is, like with anything, there's more than one way to do something. And boy, did NetApp fans (and some employees) and EVA fans (and some employees) let him have it. There was also a follow-up post by Chuck Hollis. And in the comments for both posts, NetApp and EVA champions throw out counter-evidence and poke holes in the configurations Chuck used (which were worst-case, in my opinion) that made the other offerings suffer in comparison. So when you change the configurations to ones that favor NetApp or HP, they each best EMC (of course they do...). The truth is always somewhere in the middle.


The tit-for-tat bickering that went on in the comments ran the gamut. Some were very insightful and pointed out areas that Chuck should consider. Chuck also responded to some of the comments in a respectable (although somewhat antagonistic) manner. Some of the comments were downright rude. An average day on the internet, come to think of it. My beef is Chuck's taking the worst-case or outdated scenarios for the competitors while using the new-car-smell versions of EMC's products. He hides behind the "show me the document on their website that says that" mantra. Some commenters rightfully called on him to produce the same for his gear. This all took place before the long Labor Day weekend, so it'll be interesting to see if it keeps its momentum this week or if the downtime has tempered people's zeal.


It also reminded me of a bit from Comedy Central's The Daily Show from back in May, in which Terry McAuliffe (Hillary campaign staff) and Chris Matthews (MSNBC commentator) engage in what Jon Stewart calls a "West Virginia Douche-Off". I got that kind of vibe from the back and forth between the NetApp and EMC crowds. You can see the clip here (it's the last 1:42 of the clip, starting around the 6-minute mark):









Tip of the hat to Chris M Evans, whose blog post turned me onto the drama.


Tuesday, August 26, 2008

Testing the job market

My current contract is starting to wind down, and as such I'm testing the job market to see what's out there. So far I'm only putting my resume up on the various job sites. When I last did this, the economy was still doing pretty well and I was flooded with offers within three days of publishing my resume on sites like monster.com, Dice.com and washingtonpost.com. This time the number of inquiries is significantly lower. It's not quite an apples-to-apples comparison, since I have different qualifications and am an independent contractor, but I would expect the buzzword bingo to have at least come up with some hits.


I did get one that I wouldn't touch with a ten-foot pole. First off, the hourly rate is pretty low for me at $35/hr. It also listed it in a strange way:



Rate/hr on W2 (Without any Benefits): $35/hr


Rate/hr on W2 without any benefits? IT people who do W2s typically don't do hourly; it's just too expensive to have us not be exempt from overtime. Then there was the grammar of the letter:


I came across your resume through monster. We have the following very urgent opening for Network Administrator. Below are the requirement details, just go through it and reply back to me with your updated resume and your acceptance for this position. Please do reply ASAP. Your earliest reply is highly appreciated.


And


I will wait for your response.

Please revert back to me with your most recent resume on my E mail XXXX@XXXXX.com




Uh..... thanks, but no thanks.

Anyway, hopefully things will pick up after this weekend when the batch processes run and get distributed to the various bodyshoppers and head hunters. But if they don't, my current gig hasn't ended just yet, so I should count myself lucky.


Monday, July 28, 2008

It's a cruel cuil world...

So Cuil came online, going head to head with Google (who doesn't these days?), and they hit the web with a splash, followed by a 'thud'.




200807281802.jpg


Learning the hard lessons of high traffic, I guess. On the other hand, I suspect they're getting pounded by curiosity seekers like myself. The few times I did get results, I was unimpressed so far. Plus, the graphics they choose to put by the entries are sometimes right on, sometimes very, very wrong.


Still, I see potential, but then again, ask.com has potential too, but people aren't flocking to it.


Time will tell.


Thursday, July 24, 2008

I/O and virtual machines...

Robin Harris has an interesting post:


The Virtual Machine I/O Blender


In the post he brings up the topic of what virtual machines do to the pieces of the storage puzzle that have been optimized based upon traditional I/O patterns. Does the empirical data gained through the years help at all when you pile virtual machines on it? Looks like the answer is: not so much. Which brings up the argument about 'stupid vs smart' storage. So fast dumb storage would be preferable to fast smart storage, it seems, when you're in the virtual machine arena. It's an interesting idea. Should we purchase lots of (presumably) cheaper 'dumb' storage or bigger/faster smart storage? It reminds me of the commodity server hardware vs high-end server hardware argument from a few years back. The price point won the day back then, and if storage goes the same way, I suspect it will rule storage too. There's no universal rule on what will be best for all situations, so there will be room in the market for DMXs and Tagmas, but I suspect cheaper, dumb-ish storage with VMs in mind is going to start to creep into the marketplace. It'll focus on random I/O rather than trying to guess, via caching, what the host is going to ask for next.


If you get a chance, check out Robin's article. In particular, read the comments. Even more good thoughts in there.


Wednesday, July 9, 2008

Diving in Croatia

Some notes and thoughts about diving on Hvar, Croatia in June, 2008.


Went diving out of the city of Jelsa on the island of Hvar in Croatia.




I used the Divercenter Jelsa in Jelsa. They took me out in their Bombard Explorer with a pair of Yamaha 100s on it. It got us to the dive site pretty quickly. The visibility was great and the water was very blue. The first dive (6/24) was to an island called "Nudist Island". The name comes from a nudist camp on the main island across the channel.



4.jpg

The dive was to a max depth of 111' and had a bottom time of 0:42. I started out with 225 bar and ended with ~25 bar. The visibility was easily 50'. There were several neat things to see. It was only me and Wolf, the dive master. I was a bit too excited and used a lot more air than I expected. The dive went deep early and then gradually up the incline to the shallower water. Spent the last 15 minutes or more at 30' or less.

15.jpg



10.jpg

More photos from the dive are here



The second day (6/25) was 'plan raut' (not sure of the spelling). It was just me and Wolf again. This dive was to 107' and had a bottom time of 48 minutes. Again, the last part of the dive was at 30' or less. My camera locked up on me once on this dive. The auto-shutoff had turned the camera off when Wolf found a nudibranch. I pushed the power on, and then pushed the 'close up' setting (probably too quickly), and the screen went 'pink'. I was bummed. I held the power button down for 5 seconds and it came back to life, but we had moved on from there by then.

29.jpg

The 2nd dive of the day was to Steenova... I might have the name wrong; I'm trying to find a map so I can find the exact spot. At any rate, there were 4 divers on this trip. Not speaking German was a real detriment on this dive. The cove has straight rock walls and the water was as blue as you can imagine. The walls were full of holes and overhangs with lots of things to investigate. The dive had a max depth of 89' and a bottom time of 46 minutes. I had a squeeze in my right ear that made me miss a nice photo of a lobster. Got another shot at a nudibranch, though.

56.jpg

More photos from the dive are here

The last dive was to Zala Luka. It was another awesome dive. The water was clear and the sun was shining bright, so the vis was incredible. It was to a max of 92' with a bottom time of 54 minutes. A lot of the dive took place at 30' or so. Lots of anemones and schools of fish everywhere.



93.jpg 37.jpg

It was a great diving experience and I'm so glad I went. If you're ever in the area, you have to go diving. It's incredible.

Update (12/2/2008): Added gallery links to individual dive photos for dive 1 and 2.

Tuesday, July 8, 2008

Pictures of Diving in Croatia

I've posted the pictures I took while diving in Croatia. More details to come, this is just a big dump of photos, no narrative or timeline.


More to come...


UPDATE (12/2/2008): I've migrated to a 'gallery2' system to manage my photos. The link to the pictures has been updated accordingly.


New Dive Camera!

Bought a new dive camera for my trip to Croatia. It was touch and go for a bit there; the flight was Friday afternoon and it was supposed to arrive Friday morning. Well, the timing worked out and I'm the proud owner of a DC800 from SeaLife.

31aaJDoTaiL._SL500_AA200_.jpg


It's an 8MP camera and housing combination. It has 3 underwater photo modes (I'm still learning how all that works). I have quite a backlog of photos and posts to get through, but hopefully once that's all done I'll be able to write a bit more about the camera. So far it's great, but the shutter speed isn't as fast as I'd like sometimes. And I did manage to crash the camera while on a dive. It does take some great pictures, though.




PICT0048.JPG


My dive buddy had his xenon flashlight illuminating the fish, so that's why its eye is an odd color. More on dive lights instead of strobes later too, hopefully.


New Dive Computer

First bit of scuba news in a while! Bought an Aeris Manta dive computer.




51mJ2n5XZsL._SS500_.jpg


It's been pretty easy to use and fits my wrist nicely. I don't dive enough to buy air-integrated. I really should have bought the Atmos 2, their big hockey-puck-style computer, but it only has one button and it's huge. Plus there are a few other features of the Manta I like as well. Not sure if I'll ever use the advanced features it has over the Atmos 2 (which also seems to be discontinued), but I'd hate to have bought that and then turned around in a year or so and bought the upgrade. Anyway, 4 dives logged on the computer and so far it's great!


Wednesday, June 11, 2008

Legato notes and links...

I keep telling myself to never admit I know a backup software. If you say you know it, you always run the risk of being declared the backup expert. Well, it's happened to me. I am now on the backup team with my current client until they have the expertise in house to run their Legato 7.2 infrastructure. They had some great contractors from Edge Tech Ltd. Ricky and Co. were absolute Legato ninjas. But the word came down to eliminate or convert all contractors. So they were let go, and I'm left holding the backup bag.


So here's my post collecting Legotcha links and commands.


IPnom has a great collection of man pages online


I use the nsr_group and nsr_client pages for nsradmin a lot.


Backupcentral.com has a page: What neat things can I do with nsradmin


Avisit has a list too


David Mussulman has a list of EMC Legato Networker Admin Tools

Here's some of my commands:


root@rcomanchi013:/var/tmp> nsradmin -i -
. type: nsr client
show name; group
print

Get some group info:

. type: NSR group
show name;status;start time;last start;last end
print

Show groups to find running status:

. type: NSR group
show name;status;last start
print



# list the volumes the media server knows about that aren't "full" (!full doesn't seem to work)
mminfo -a -r 'volume,%used,pool,location' -q '!full'

To find a tape that's been cloned off site, when you know the 'onsite' tape that the data was backed up to:

mminfo -q volume=N31407 -r 'volume,cloneid,client'

This will spit out things like:

Volume  CloneID     Host
N31407  1204238003  host2.name.net.com.org.blah
N31407  1204239940  host2.name.net.com.org.blah

Then you can search for the CloneID in question:

mminfo -q cloneid=1204239940 -r 'volume'

Which will put out a list of volumes. Find the volume of the type that goes off site and you're all set.
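And one more I find myself typing a lot: listing recent savesets for a single client, reusing the host name from above (a sketch; adjust the query and report fields to taste):

# what's been backed up for one host, and to which volumes
mminfo -q 'client=host2.name.net.com.org.blah' -r 'savetime,volume,name,ssid,totalsize'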

Tuesday, June 10, 2008

Higher gas shifting the traffic jams online?

So I've been reading several articles that say that $4.00 gas is causing people to change their driving habits. Consolidating trips, canceling vacations, no free pizza delivery, telecommuting, etc. Which made me start to think: if gas goes higher (and it probably will) and telework becomes more and more prevalent (I hope it does, I love working from my home office), will the traffic jams in the DC Metro area move from the real highway to the information highway? ISPs are already cracking down on P2P and other high-bandwidth apps. What's going to happen when a few thousand GoToMyPC users hop onto the net on your local DSLAM or its cable equivalent? Will Verizon regret giving such high speeds to FIOS users initially? I guess on the plus side, adding more lanes to the information highway is far easier than the real highway. It'll be interesting to see how this pans out.


I should google this meme... I'd bet I'm tech-person number 32,1231 to have come up with this 'all on their own' this week.


Rich


Brocade oddness today...

Came across some unusual Brocade errors today. And since Google turned up nothing, I suspect it's pretty rare.


When I telnet to the box, I get the following:

Connected to 10.20.30.40.
Escape character is '^]'.

libipadm error: client connect failed /tmp/ipadm-g-login-23293-d6ba659370229
ipadm do_backtrace connect failed pid 23293, name login
/fabos/lib/libipadm.so.1.0[0xf6720c0]
/fabos/lib/libipadm.so.1.0[0xf6723a8]
/fabos/lib/libipadm.so.1.0(ipAdmLocalAddrIdGet+0x4c)[0xf6726c0]
/lib/security/pam_fabos.so[0xfd889e4]
/lib/security/pam_fabos.so(pam_sm_authenticate+0x25c)[0xfd891c8]
/lib/libpam.so[0xff8c7d0]
/lib/libpam.so(_pam_dispatch+0x2b0)[0xff8cd30]
/lib/libpam.so(pam_authenticate+0x90)[0xff8f17c]
/bin/login[0x10003254]
/lib/libc.so.6[0xfdd8930]
/lib/libc.so.6[0xfdd8a34]

Fabos Version 5.3.1

Password:

BottomSwitch:admin> ipaddrshow
libipadm error: client connect failed /tmp/ipadm-g-ipaddrshow-23398-d6bb020d2dce6
ipadm do_backtrace connect failed pid 23398, name ipaddrshow
/fabos/lib/libipadm.so.1.0[0xf8ca0c0]
/fabos/lib/libipadm.so.1.0[0xf8ca3a8]
/fabos/lib/libipadm.so.1.0(ipAdmLocalAddrIdGet+0x4c)[0xf8ca6c0]
ipaddrshow(main+0x44)[0x10006018]
/lib/libc.so.6[0xf75e930]
/lib/libc.so.6[0xf75ea34]
ipAdmLocalAddrIdGet() returned 21
BottomSwitch:admin>


The errshow gives me the following from weeks ago:



2008/05/15-02:40:35, [RAS-1001], 137,, INFO, SilkWorm48000, First failure data capture (FFDC) event occurred.
2008/05/15-02:40:41, [RAS-1001], 138,, INFO, SilkWorm48000, First failure data capture (FFDC) event occurred.
2008/05/15-02:41:03, [TRCE-1001], 139,, WARNING, SilkWorm48000, Trace dump available (Slot 6)! (reason: FFDC)
2008/05/15-02:41:03, [TRCE-1004], 140,, WARNING, SilkWorm48000, Trace dump (Slot 6) was not transferred because trace auto-FTP disabled.
2008/05/15-02:41:05, [TRCE-1001], 141,, WARNING, SilkWorm48000, Trace dump available (Slot 6)! (reason: FFDC)
2008/05/15-02:41:05, [TRCE-1004], 142,, WARNING, SilkWorm48000, Trace dump (Slot 6) was not transferred because trace auto-FTP disabled.
2008/05/15-02:42:08, [RAS-1001], 143,, INFO, SilkWorm48000, First failure data capture (FFDC) event occurred.
2008/05/15-02:42:09, [TRCE-1001], 144,, WARNING, SilkWorm48000, Trace dump available (Slot 6)! (reason: FFDC)
2008/05/15-02:42:09, [TRCE-1004], 145,, WARNING, SilkWorm48000, Trace dump (Slot 6) was not transferred because trace auto-FTP disabled.


So it looks like something got wonky with the CPs on the 15th of May. Fortunately it's a lightly used switch in the not-yet-operational DR facility. But it's a little scary, mostly because Google doesn't come up with anything for the stack-trace lines. The errshow lines come up with some vague hits, but nothing of value. A little more investigation shows that this happened on both Brocade 48000 director switches. Strange stuff. Rebooting the switches fixed it, but it shouldn't have happened in the first place, in my opinion. Now I'm off to investigate what happened on the 15th of May.
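One thing I'll probably do while I'm in there: the RASlog complained that the trace dumps weren't transferred because trace auto-FTP is disabled, so it's worth pointing the switch at an FTP server so the next FFDC event actually gets captured. Roughly (from memory, so treat the exact flags as assumptions and check the Fabric OS manual for your version):

supportftp -s    (set the FTP host, user, password and directory for dump transfers)
supportsave      (collect RASlog, trace dumps, etc. into one capture for Brocade support)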


Friday, June 6, 2008

Mystery volume mounted on my Macs...

I have the very nifty utility Growl. It provides "useful notifications that you control". I run several Growl-aware apps and an add-on utility called "HardwareGrowler", which is quite awesome. It gives you a Growl notification when a device is plugged in or unplugged. For example, when I plug my network drop in, I get a notification that the en0 device became active; further, it tells me that it's 100Mb, etc. I have Growl set to automatically remove notices after a few seconds if the machine is not idle, but idle notices stick around until I close them.


Yesterday I had a Growl notification that said "Volume Mounted Keystone-1.0.1.340", which was strange, because no volume showed up in the Finder.


200806061420.jpg


rwhiffen-macbook:networker rwhiffen$ df -k
Filesystem     1K-blocks      Used     Avail Capacity  Mounted on
/dev/disk0s2   116753840  61716172  54781668    53%    /
devfs                118       118         0   100%    /dev
fdesc                  1         1         0   100%    /dev
map -hosts             0         0         0   100%    /net
map auto_home          0         0         0   100%    /home
/dev/disk1s2        1348       548       800    41%    /Volumes/Keystone-1.0.1.340
/dev/disk2s3    97554672  60569148  36985524    62%    /Volumes/external
rwhiffen-macbook:networker rwhiffen$

So now I see a new volume, /Volumes/Keystone-1.0.1.340, and it has two files: Keystone.tbz and install.py. And install.py has a header that says the following:



#!/usr/bin/python
# Copyright 2008 Google Inc. All rights reserved.

"""This script will install Keystone in the correct context
(system-wide or per-user). It can also uninstall Keystone. is run by
KeystoneRegistration.framework.

Example command lines for testing:
Install: install.py --install=/tmp/Keystone.tbz --root=/Users/fred
Uninstall: install.py --nuke --root=/Users/fred

Example real command lines, for user and root install and uninstall:
install.py --install Keystone.tbz
install.py --nuke
sudo install.py --install Keystone.tbz
sudo install.py --nuke

For a system-wide Keystone, the install root is "/". Run with --help
for a list of options. Use --no-processes to NOT start background
processes (e.g. launchd item).

Errors can happen if:
- we don't have write permission to install in the given root
- pieces of our install are missing

On error, we print an message on stdout and our exit status is
non-zero. On success, we print nothing and exit with a status of 0.
"""


So it seems a Google product has mounted a volume with a Python script in it. Funny thing is, if I google for it, I get zero hits, so I was a bit concerned about what it is exactly. If you bzip2 -d the tbz file and tar tvf the resulting tar file, it has a lot of files in it:

drwxr-xr-x macbuild/staff 0 2008-06-04 19:36:38 GoogleSoftwareUpdate.bundle/
drwxr-xr-x macbuild/staff 0 2008-06-04 19:36:38 GoogleSoftwareUpdate.bundle/Contents/
drwxr-xr-x macbuild/staff 0 2008-06-04 19:36:38 GoogleSoftwareUpdate.bundle/Contents/Frameworks/
drwxr-xr-x macbuild/staff 0 2008-06-04 19:36:38 GoogleSoftwareUpdate.bundle/Contents/Frameworks/GoogleShared.framework/
lrwxr-xr-x macbuild/staff 0 2008-06-04 19:36:38 GoogleSoftwareUpdate.bundle/Contents/Frameworks/GoogleShared.framework/GoogleShared -> Versions/Current/GoogleShared
lrwxr-xr-x macbuild/staff 0 2008-06-04 19:36:38 GoogleSoftwareUpdate.bundle/Contents/Frameworks/GoogleShared.framework/Resources -> Versions/Current/Resources
drwxr-xr-x macbuild/staff 0 2008-06-04 19:36:38 GoogleSoftwareUpdate.bundle/Contents/Frameworks/GoogleShared.framework/Versions/
drwxr-xr-x macbuild/staff 0 2008-06-04 19:36:38 GoogleSoftwareUpdate.bundle/Contents/Frameworks/GoogleShared.framework/Versions/A/
-rwxr-xr-x macbuild/staff 221032 2008-06-04 19:36:38 GoogleSoftwareUpdate.bundle/Contents/Frameworks/GoogleShared.framework/Versions/A/GoogleShared
drwxr-xr-x macbuild/staff 0 2008-06-04 19:36:38 GoogleSoftwareUpdate.bundle/Contents/Frameworks/GoogleShared.framework/Versions/A/Resources/
-rw-r--r-- macbuild/staff 884 2008-06-04 19:36:38 GoogleSoftwareUpdate.bundle/Contents/Frameworks/GoogleShared.framework/Versions/A/Resources/Info.plist
lrwxr-xr-x macbuild/staff 0 2008-06-04 19:36:38 GoogleSoftwareUpdate.bundle/Contents/Frameworks/GoogleShared.framework/Versions/Current -> A
drwxr-xr-x macbuild/staff 0 2008-06-04 19:36:38 GoogleSoftwareUpdate.bundle/Contents/Frameworks/Keystone.framework/
lrwxr-xr-x macbuild/staff 0 2008-06-04 19:36:38 GoogleSoftwareUpdate.bundle/Contents/Frameworks/Keystone.framework/Keystone -> Versions/Current/Keystone



So it seems that the Google Software Update app launched itself and downloaded some kind of update. Funny thing is, I don't recall ever telling Google it was OK for its app to do this. So I go into Google Updater and I find out that I 'kind of did':

200806061450.jpg

So I don't have the "notify me" box checked, which is why I didn't get told about it. Strange stuff. Not sure I like it.
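If you want to see how it keeps itself running, you can look for the launchd jobs it registers. A quick check (the 'google'/'keystone' naming is an assumption based on the volume name):

# look for Google launchd agents, per-user and system-wide
ls ~/Library/LaunchAgents /Library/LaunchAgents 2>/dev/null | grep -i -e google -e keystone
launchctl list | grep -i -e google -e keystone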

Friday, May 30, 2008

ldap and Active directory notes...

I need a place to squirrel away my notes about LDAP and Active Directory, and this seems as good a place as any. I have a bunch of bookmarks in Del.icio.us, but they're unorganized and there's no narrative to link them together.


So I'm having issues with Sun's Identity Sync for Windows product (which they seem to have renamed). I like the product, but this issue is driving me crazy. We have an intermediate fix for now, but I need a long-term fix. At any rate, here's my crib sheet of LDAP and AD links and notes.


Finding your Active Directory Site and Domain Controllers


This is an interesting read, gives you the ldapsearch syntax to ask AD via LDAP who your domain controllers are.







Querying Active Directory with Unix LDAP tools.


Another good one, goes into some more details.





Using ldapsearch to query Active Directory


This is a Mac-slanted version of some of the same stuff covered already.




It's worth noting I'll be using the ldapsearch syntax from OpenLDAP. I used MacPorts to install OpenLDAP on my MacBook Pro. The Sun ldapsearch has a different syntax but can do the same things.



So one of the things I was having trouble with was finding the domain controllers so I could add the BIO domain to ISW. I could see them, but for whatever reason ISW couldn't. I assumed that there was some issue with the username I was binding with. Turns out, that's not the case.




For the first domain I'd run the following and get:




ldapsearch -v -H ldaps://<Domain_Controller_domain1> -x -b "ou=domain controllers,DC=<DOMAIN1>,DC=<FOREST>" -D "<BIND_USER>" -W "(objectclass=computer)" "distinguishedName|dNSHostName|name"

ldap_initialize( ldaps://<Domain_Controller_domain1> )
Enter LDAP Password:
filter: (objectclass=computer)
requesting: distinguishedName|dNSHostName|name
# extended LDIF
#
# LDAPv3
# base <ou=domain controllers,DC=<DOMAIN1>,DC=<FOREST>> with scope subtree
# filter: (objectclass=computer)
# requesting: distinguishedName|dNSHostName|name
#

# <DC_NAME1>, Domain Controllers, <DOMAIN1>.<FOREST>
dn: CN=<DC_NAME1>,OU=Domain Controllers,DC=<DOMAIN1>,DC=<FOREST>

# <DC_NAME2>, Domain Controllers, <DOMAIN1>.<FOREST>
dn: CN=<DC_NAME2>,OU=Domain Controllers,DC=<DOMAIN1>,DC=<FOREST>






What you'd expect to find: the list of domain controllers. There were actually 52 in my list, but you get the idea. If you take the "distinguishedName|dNSHostName|name" off the end, it will give you a lot more details about each DC record.




ldapsearch -v -H ldaps://<Domain_Controller_DOMAIN2> -x -b "ou=domain controllers,DC=<DOMAIN2>,DC=<FOREST>" -D "<BIND_USER>" -W "(objectclass=computer)" "distinguishedName|dNSHostName|name"

ldap_initialize( ldaps://<Domain_Controller_DOMAIN2> )
Enter LDAP Password:
filter: (objectclass=computer)
requesting: distinguishedName|dNSHostName|name
# extended LDIF
#
# LDAPv3
# base <ou=domain controllers,DC=<DOMAIN2>,DC=<FOREST>> with scope subtree
# filter: (objectclass=computer)
# requesting: distinguishedName|dNSHostName|name
#

# <DC_NAME3>, BP1 Domain Controllers, Domain Controllers, <DOMAIN2>.<FOREST>
dn: CN=<DC_NAME3>,OU=BP1 Domain Controllers,OU=Domain Controllers,DC=<DOMAIN2>,DC=<FOREST>

# <DC_NAME4>, BP1 Domain Controllers, Domain Controllers, <DOMAIN2>.<FOREST>
dn: CN=<DC_NAME4>,OU=BP1 Domain Controllers,OU=Domain Controllers,DC=<DOMAIN2>,DC=<FOREST>

# <DC_NAME5>, BP2 Domain Controllers, Domain Controllers, <DOMAIN2>.<FOREST>
dn: CN=<DC_NAME5>,OU=BP2 Domain Controllers,OU=Domain Controllers,DC=<DOMAIN2>,DC=<FOREST>



Almost what you'd expect to see. And at first I thought it was exactly what I'd expect to see. But what I noticed later is that there's an extra OU (which adds no value, I might add; no policies are set at that OU level) inserted into the domain structure. Someone wanted to tidy up, so they made OU=BP1 Domain Controllers and OU=BP2 Domain Controllers and flung all the DCs into those two folders. Turns out ISW doesn't traverse sub-OUs, and that's why it wasn't finding the DCs in DOMAIN2. The quick fix was to move a few DCs up a level, let it do the discovery, and then run with it. But if I didn't have the ldapsearch ability, I probably would never have figured it out.




Further investigation has put me on the hunt for how to find out who has the PDC FSMO role via LDAP, because that seems to be where I'm falling down now. The adventure continues and I'm sure I'll be updating this...
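For my own notes, the lead I'm chasing: the PDC emulator owner is recorded in the fSMORoleOwner attribute on the domain NC head, so something like this (same placeholders as above) should cough it up:

ldapsearch -v -H ldaps://<Domain_Controller_domain1> -x -s base -b "DC=<DOMAIN1>,DC=<FOREST>" -D "<BIND_USER>" -W "(objectclass=*)" fSMORoleOwner

The value comes back as the DN of the PDC's NTDS Settings object, so there's still some string surgery left to map it to an actual hostname.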


Sun Identity Sync for Windows troubles...

Sun's Identity Sync for Windows has been a pain in my butt lately. Turns out ISW doesn't make it easy to point at new global catalogs.


Products in play:



  • MS Windows AD (unsure of AD version right now, servers are 2k3).


  • SunONE Identity Server 5.2 patch 2


  • Sun Identity Sync for Windows 2004Q3


So what brought this about? At ${CLIENT} they have an application that, for various reasons, can not authenticate directly against AD for credentials and group memberships. It needs some attributes set for the user, and setting those in AD was problematic, if I remember correctly. This was set up a long time ago and a lot has changed, so if we were doing this from scratch today, it's likely that this version could be made to work with AD. The latest version of this Documentum product supports native AD anyway, so it's kind of a moot point. Anyway, on to the 'issue' at hand.


The theory of operation is that an account is created in AD, added to the appropriate AD OUs and groups, and given the correct attributes. ISW then takes that information and creates the corresponding LDAP entries. After creation, ISW monitors AD for changes and makes the corresponding LDAP changes. It does this by talking/listening to the Domain Controller (DC) with the PDC FSMO role. Now, when a user changes their password, ISW sees this. But it's not a domain controller, so it can't actually capture the password. So ISW does the next best thing: it tells LDAP that the stored password it has is invalid and that it needs to verify against AD the next time the user logs in. So if I change my password, the LDAP password entry gets set to PW-NEEDS-SYNC (I'll try to find an actual entry to get the 'real' value). Then, when I try to log in via LDAP, the LDAP server takes the username and password credentials I have provided and uses them to attempt a bind to an AD server. It's the actual password I provided, not an encrypted hash or whatnot, which is why all of this is done over SSL. If the bind is successful, LDAP does two things: one, it lets me into the app; two, it stores the password it just used as the new LDAP password, and presto, it has my 'changed' password. Pretty clever.


Now on to my story. So this has been working, not perfectly, but it worked. It talked to three AD servers; I'll call them DC2, DC4 and VDC1. DC2 is my PDC role holder, and DC4 and VDC1 are the DCs we were using to talk to the two domains in play. Well, $CLIENT is moving their data center from Falls Church, VA to a CSC facility in Chicago. The datacenter move has been a great project; I should write up some of my experiences with that. It went about as well as could be expected. Part of it is migrating to new domain controllers, so my DC2, DC4 and VDC1 are going away. Three big problems: one, DC2 is the SSL CA for the enterprise (they failed to notice this and migrate its services). Two, the AD team didn't realize they had an ISW/LDAP dependency. Three, ISW doesn't let you simply repoint the global catalogs or domain controllers. Anyway, water under the bridge; we stood up a new enterprise CA, installed certs on the DCs and allowed LDAPS on them.


The big troubles have been with repointing ISW. Turns out you have to uninstall the ISW connectors and reinstall them to repoint them. Once you do that, you can specify a new global catalog from which ISW can learn the PDC FSMO role. One issue we did come across is that when ISW does a search for DCs in the GC, it does something equivalent to:


ldapsearch -v -H ldaps://<DC-hostname> -x -b "ou=domain controllers,DC=<DOMAIN TREE STUFF>" -D "<USER TO LOG IN AS>" -W "(objectclass=computer)"


But it doesn't traverse sub-folders. So if your domain controllers are 'organized' into sub-OUs, you're sunk. This happened in my case. The quick fix was to move a few 'core' DCs to the top level, which allowed ISW to find the DCs to talk to. I'm still having issues getting it to work with the GC correctly for some reason. My ticket has been open with Sun for over a month now.
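You can reproduce the ISW behavior with plain ldapsearch by flipping the scope flag, which is how I convinced myself it was a scope problem and not a permissions problem (same placeholders as the command above; '1.1' just means return no attributes, only the DNs):

# one-level search (what ISW appears to do) -- misses DCs tucked into sub-OUs
ldapsearch -H ldaps://<DC-hostname> -x -s one -b "ou=domain controllers,DC=<DOMAIN TREE STUFF>" -D "<USER TO LOG IN AS>" -W "(objectclass=computer)" 1.1

# subtree search -- finds them all
ldapsearch -H ldaps://<DC-hostname> -x -s sub -b "ou=domain controllers,DC=<DOMAIN TREE STUFF>" -D "<USER TO LOG IN AS>" -W "(objectclass=computer)" 1.1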


More to come I'm sure...


Thursday, May 22, 2008

HP's Array-based Replication Cookbook

Found the following article on the Wikibon storage portal today.


HP's Array-based Replication Cookbook


It came out in December of 2007, but it provides a great overview of how a potential Oracle 11g with ASM (booooooo) on HP EVA8x00 arrays would work.


From the PDF:



This white paper provides a comprehensive set of test-proven best practices for properly configuring, deploying, and operating an Oracle 11g database with Oracle’s Automatic Storage Manager (ASM) on an HP Enterprise Virtual Array (EVA), using Continuous Access (CA) as the remote copy infrastructure.



It's a good read, even if you're not an EVA or ASM person. It's a revisit of the same topic they did for 10g/ASM/8x00. Nice to see them update it, although they seem to have forgotten to update some of the graphics...




200805221033.jpg


Ooops... Should be 11G RAC.


Rijndael animation...

So if you ever wanted a graphical representation of how Rijndael works, want no more.


Formaestudio.com hosts a great Rijndael inspector animation.


It walks through the encryption process step by step.


Thursday, May 8, 2008

Playing with simple logo ideas...

So I need a logo. Well, not a logo so much as a graphic for my simple website. I'm playing with "Art Text", a neat app for making text into logo-esque graphics.


So far, I'm toying with:




200805081140.jpg


or




200805081143.jpg


or




200805081147.jpg


Perhaps I'd do it in a green glowing font to look like an old-school terminal, which is kind of the effect I want. All these are in blue simply because blue is my favorite color.


Tuesday, April 15, 2008

Interesting Reading: the Dead Sea effect

Came across an interesting blog post titled: The Wetware Crisis: the Dead Sea effect


It's an interesting post using the Dead Sea, with its evaporating water and concentrating salt, as a metaphor for certain IT environments. The evaporation is the 'good' IT people leaving, and the salt is the 'less capable' IT staff staying. The net effect is that eventually it becomes too salty to support life (a well-functioning IT environment). It was interesting to me because I've been in those situations and have watched it happen. Talented person after talented person leaves, and what's left behind is a group of people who can't get a project across the finish line. Or if they do, it's way over budget, over time, under-featured and under-tested.


The comments on Slashdot and on the blog post itself are also worth browsing; there are some interesting nuggets. Like the one from wjaf, who takes exception to being called residue, since he's one who stayed behind and considers himself water. Well, the article never said the sea was dry and all salt. And by sticking around so that "...we work our backsides off keeping the company afloat," he's enabled the problem to continue. A comment by Will.Rubin is also off the mark a bit; Mr. Webster responds quite well to it, I feel.


One thing that strikes me funny is his notion of "TEPES" - Talent, Education, Professionalism, Experience, and Skill. When I read that, I read TERPES (an inserted R). One of the many local sports teams around here is the Univ. of Maryland Terrapins, abbreviated as the Terps. And then Terpes sends me down a childish rhyming path.


Nothing in IT is ever this cut and dried, nor is a salt-free IT shop possible. So it's important to remember it's an analogy, used as an allegory about IT hiring and talent retention.


Thursday, April 3, 2008

The Golden Answer for Consulting...

In the pre-consulting days I usually had a strong opinion about how, what, why and when things should be done. But lately I'm starting to find it's the same answer to every question. This answer is amazing in its effectiveness and simplicity. It also really helps me ensure I'm giving the customer what they want. That answer?


It Depends.


Just that simple. Should we go with lots of low end servers, or fewer mid-range/high end servers? Should we use tape backup or replication to migrate the data? HP or IBM? Solaris, AIX or Linux?


It Depends.


It works so wonderfully because it makes people explain why they're making the choices they're making. They never want to simply know if they should go with choice A or choice B. They could flip a coin, have a Google-fight or pick from a Gartner quadrant if the choices were that easy. But things are never that easy; it depends. It depends on a lot of factors, and they're not all weighted equally, and they're weighted differently by different stakeholders. And just to make things even more fun, consultants have their own personal preferences and biases.


It's the conversation that ensues that's the real answer. The conversation where you find out what's driving the choices. The conversation is where you separate the must-haves from the nice-to-haves. The conversation is what makes or breaks things. So is it an oversimplification to say that Consulting is the art of conversation?


It Depends.


Wednesday, April 2, 2008

Oracle's VM


Well, it had to happen. Oracle is jumping into the VM/Hypervisor arena. Have they learned nothing from Unbreakable Linux? Yeah, sure, you can say you have a virtualization strategy, but where's the value you offer?



http://www.oracle.com/technologies/virtualization/index.html






They simply put an Oracle sticker on Xen and offer Oracle support instead of Xen's. I guess if you like the single-neck-to-choke approach it could have value. And DBAs do tend to be the most Kool-Aid-drinking group of users, so maybe it will have value, but I'm of the same mind as the Storagezilla blog. And to sum that up: HUH???






Grid services/utility computing is going to take off... in 10 years.

*NOTE*I originally wrote this Sunday - May 15, 2005


Some thoughts on grid and utility computing....


I think grid and utility computing is going to take off. In ten years. The pieces are starting to come out; they just don't knit tightly together yet. Also, people have to be convinced that this will work, that it's secure and that it'll save them money.


For example, in the Solaris world today, if someone had a Solaris 10 N1/grid/container/zone/whatever-name-the-marketing-folks-dream-up-next world, you'd take a relatively low-end machine, install Solaris 10 on it, create a zone with zonecfg and zoneadm, install your software into the zone, test and validate the configuration, and move that zone onto the grid. On the grid, the end users would access whatever web server, app server or database server you had running in it. Getting resource bound? Pay the grid owner to add additional resources of whatever you need. Maybe you're already consuming the entire V890 you're running on, so you take a maintenance window to shut down your zone, deport the diskgroup (Veritas lingo) and import the DG on a bigger piece of iron. Presto: no re-validation on the new hardware, no missing conf file or cron job that people forgot to tell you about. Because it's all in the container/zone, it won't get lost. Because the hardware is abstracted, it's guaranteed to work. No risk! This can be done today, but no one that I'm aware of is selling "grid" services. I'm sure someone would do it for you, but today you'd probably be their only customer and everything would be a custom job just for you. So they'd bake the growth costs into their fees and it probably wouldn't be any cheaper than doing it yourself, today.
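
To make that concrete, here's a hedged sketch of the workflow with made-up zone and diskgroup names. Note that the detach/attach steps only showed up in later Solaris 10 updates, after this was originally written, so treat this as the eventual shape of things rather than a recipe.

# Build and validate the zone on the small box:
zonecfg -z appzone <<EOF
create
set zonepath=/zones/appzone
set autoboot=true
commit
EOF
zoneadm -z appzone install
zoneadm -z appzone boot

# Later, to move it to bigger iron (zonepath lives on the VxVM diskgroup):
zoneadm -z appzone halt
zoneadm -z appzone detach
vxdg deport appdg        # release the Veritas diskgroup

# On the new host:
vxdg import appdg
zoneadm -z appzone attach
zoneadm -z appzone boot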


Now, here's where the future starts to get really interesting. In the short term, I see companies making their own production grids out of a cluster of mid-to-high-end servers, say 2900s or whatnot, using Sun or Veritas clustering to manage who runs where. This should happen in a year or two. This is where someone will see the cash to be made by running one gigantic grid for everyone to outsource to. But before that can happen, the security of grids needs to be improved and rigorously proven. The next piece is being able to move a container from one box to another without shutting it down. Veritas is going to have this "real soon," I believe. Once their "UpScale" product, which they have working, is released, you will be able to do just that. The next piece comes from Sun. Sun needs to make the servers meld into one seamless machine no matter how many actual boxes you have. Some of this is close already, but there's a long way to go. I'd say Solaris 11 at best. That feels like 2008 to 2010 to me right now. There is a big question of whether Sun can hold on that long.


Do I see this happening in the Linux space? Yeah, but only after Sun or someone else does it first. It'd be foolish for the Linux community to invest any significant effort into this idea until it starts to prove a viable model. Otherwise you tie up the kernel with lots of code that may serve no purpose.


Just a thought,


Rich


Long Tail Business Models

Today I read an article on businessshrink.biz: http://businessshrink.biz/psychologyofbusiness/2007/10/02/internet-business-is-killing-the-8020-rule/ and it got me thinking about it a bit more. I don't really think the 80-20 rule applies to the internet the same way it might to the brick-and-mortar world. It was interesting to learn about Amazon's success with obscure book titles. I guess the real link between the 80-20 rule and the web is that the web finally makes it possible to make money on the 20 rather than the 80, which allows you to stick a little bit more to principle rather than kowtowing to the masses.


Software as a Service

Just some ramblings about SaaS


I saw a job opportunity at a company that had a requirement for experience in a SaaS environment. SaaS? Never heard of it. Well, it turns out I have, I just didn't have a name for it. Software as a Service: the 'wetlands' spin on the 'swamp' of ASP. I like the idea of calling it SaaS. It blends nicely into the idea of an SOA enterprise architecture. If the EA is done right, you should be able to add and remove your own SOA components and SaaS providers. Where things get interesting in the SaaS space, to me, is the question of data ownership, data security and data transportability.


If I'm a provider, ownership is a simple ToS clause in the contract. Security makes or breaks me. Transportability, the ability to move data from my company to that of my competitor or bring it in house, is a double-edged sword, with the sharper edge pointed towards me. It cuts me in that it allows my clients to easily jump to a provider with a lower TCO; it gives them leverage to threaten to leave. It cuts my competition and brings revenue when I use it to take customers away from them (even when the competition is the customer's in-house staff), and it has a potential for revenue in professional services engagements to migrate data between systems. It's the classic line from Zorro about a sword: it's like a bird, hold it too tight and it dies, hold it too loose and it flies away. It also means you need to compete on innovation. You have to be faster, cheaper or better to win business. With new startups coming around every day, it means you have to keep a full stable of developers and always be preparing for 'the next big release'. It's a fun environment to work in, but it also has a burn-out rate.


If I'm the client, data ownership needs to be crystal clear up front. I know there were some CRM-type SaaS providers where the data belonged to the provider, not the client. Leaving them meant they took ownership of your data. You could take a copy, I'm sure, but they had a copy too, which they could do lord knows what with. Security needs to be proven; independent audit reports and certifications are crucial. I need to know an insider at the company isn't going to sell my data, whatever it may be, to my competitors or the press. Data portability is a pretty big issue. I need to be able to easily move to another provider if better opportunities present themselves, or if the vendor goes out of business or is acquired. I need to be able to move it quickly to limit disruptions to my end users and to provide continuity.


I think as high-bandwidth connectivity becomes cheaper and faster, the SaaS market will continue to grow and be very vibrant. What I primarily see so far is niche or small-segment SaaS providers: recruiting/HR, CRM, desktop apps and the like doing well. If the model continues to prove itself, eventually larger segments of back-office software services will be provided, until the whole data center is external. Another interesting mash-up will be when SaaS vendors start melding with on-site managed service and outsourcing providers. They could then begin to provide a la carte plans, leaving customers to pick and choose what services they want to keep in house and what they want to just pay someone else to deal with.


New (to me) security setting in Sol 10

I'm installing Solaris 10 Update 4 onto a VM on my MacBook Pro. I've done it a bunch of times: Sol 9, Sol 10, OpenSolaris, etc. This time I noticed a different screen in the install process:





[Image: Picture%201.png]



It now asks you for the initial security posture. So basically I'll have only SSH on by default. Not too shabby! Not sure how long this has been in Solaris 10, but it's a welcome change.
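
If memory serves, this is the 'Secure by Default' work, and the same posture can be flipped after install with netservices(1M). A hedged sketch, assuming Solaris 10 11/06 or later:

# Limited profile: most network services local-only, sshd stays on:
/usr/sbin/netservices limited

# Revert to the traditional wide-open posture:
/usr/sbin/netservices open

# See which network service instances are enabled:
svcs -a | grep network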