Thursday, December 31, 2009

Getting there from here

I've started this rambling post four or five times and deleted it every time. So I've decided to take a different tack: rather than writing one long post that says everything, I'm going to write it in pieces. That way I'll actually get something written. As the Japanese proverb goes, "Vision without action is a daydream. Action without vision is a nightmare." All too often I've run into folks who are long on vision but short on action. A few times I've been in organizations that were restructuring to better cope with the current environment; the ever-present 're-org'. One of those was a transformation into a service delivery organization, which was a good idea with good vision behind it. The action is where the idea died and cost people their jobs.

I've had this picture hanging around for a few years. I stole it from an issue of eWeek, from one of those articles that is really nothing more than an advertisement in essay form.
[image: eWeek chart of the IT organizational maturity model, from 'turmoil' and 'reactive' up to 'service-based']

This picture does a pretty good job of explaining what I mean. Although it's not the buzzword it used to be, being a 'service-based organization' was the goal of a lot of IT organizations. On paper it looks great, and it can be an effective way to run an organization. Unfortunately, the trick is getting from where you are to where you want to be. It's been all too common for organizations to 'green field' the new way of doing things and make a sweeping change to transform into the desired structure in the shortest possible time frame. And it's usually a disaster for the first two quarters. There's a lot of uncertainty about how things get done or who does them. Process bottlenecks creep in everywhere. There's inconsistency in implementation between teams. And while all of this is going on, real work needs to get done to keep the business going. After a while people start to revert to the old way of doing things, or a hodge-podge in between the old and new.

The step that gets missed is the transition, and how much transition can be achieved in one fell swoop. If you're currently a 'turmoil' or 'reactive' organization and you want to be a service-based organization, it's unrealistic to jump right to the end state. Without learning the lessons that come with being reactive, it's difficult to be proactive. If an organization doesn't have a solid proactive foundation, it can never be service-based. Worse yet, there are budgetary considerations that go along with crossing over from one level to another. Software and hardware tools are often needed to achieve the desired state. Although often overlooked in the planning stages, it's possible to make up that budgetary gap. Another gap that's overlooked is the people side of things. I have never seen an organization budget staff time and overhead for these types of changes. It's always expected to be done in the margins after a one- or two-hour 'training course' that typically just reads the new process aloud to everyone in attendance. No attention is paid to how to get the staff to the end goal. No real-world examples provided for how things should work. No governing authority to turn to for guidance. No one to find the parts of the organization that are floundering in the new process/structure and pitch in and help them through it. Proud of their new organizational structure and plan, leadership passes it down the chain, with implementation left as an exercise for the reader.

So as I manage my team, I've tried to apply some of the failure lessons I've learned. I don't make broad sweeping changes if avoidable. There always needs to be a balance, of course; you don't want to make hundreds of small course corrections when a few larger ones will be as effective, but I lean towards the smaller changes. I plan in the overhead: if I'm going to add new processes or procedures to my staff's duties, I adjust time expectations accordingly. An example would be our post-mortems on outages. I wanted to change how that was done. It should be a 30-minute meeting, but because people were new to it, the first few were scheduled for 60 to 90 minutes and we brought in lunch. I walk everyone through a new process a few times. Going back to the post-mortem example, we talked openly about the process and the actual problem in the same context. Giving people a new process without a concrete example to work with leaves things to interpretation, and you'll get as many interpretations as you have staff members. By walking through it a few times with everyone, they all hear the same questions and my answers to those questions. It's not perfect or without flaws, but it seems to be working.

Google's don't-ask-don't-tell install problem...

Yeah, yeah, I know... nobody but me seems to care. Anyway, it seems Google has decided to update "Google Voice and Video" to version 1.0.19.1554 and "Google Software Update" to version 1.0.7.1306. What's interesting about the second one is what happens now when you try to launch the old "Google Update" application:

[screenshot: the old Google Update application in the Finder]

You're greeted with this:

[screenshot: Google Updater window offering only 'Get More Google Software']

There's only one option, "Get More Google Software," which isn't what I asked for; I asked for Google updates. OK, no problem, I'll just quit the app:

[screenshot: Google Updater with the Quit option removed]

Or maybe I won't, since they've taken that choice away (along with the 'About' option, so I can't tell whether they've updated it without my knowledge). Oh well, let's take the only path given to me and "Get More Google Software":

[screenshot: the Google Mac software page, a Finder permissions dialog, and a 'preparing to move to trash' dialog]

Oh fun! Three new windows, one expected, two not: the Google web page with Mac software, a dialog box because the Finder needs permissions, and a disconcerting 'preparing to move to trash' dialog box. I just wanted to see if I was running the latest version of Google Earth and now I'm given this? Worse yet, there's not much context for the permissions request:

[screenshot: the Security Agent permissions request]

As before, it's not so much about the actual changes Google has made to my machines as the lack of permission they sought from me to make them. This one is particularly troublesome for me because they don't really give me any choice but to take their path, and that path isn't clearly explained in their dialog boxes.
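For what it's worth, you can at least find out which version of the updater you're actually running, since the About box is gone. A minimal sketch; the bundle path is an assumption based on where it lives on my machine, so adjust to taste:

# Path is an assumption -- point it at wherever the updater lives on your Mac
APP="/Applications/Google Updater.app"

# Read the version straight out of the bundle's Info.plist,
# since the app no longer tells you itself
defaults read "$APP/Contents/Info" CFBundleVersion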

Tuesday, July 21, 2009

My first run-in with DotHill storage

So I'm having my first run-in with a DotHill array (rebranded as an HP product). The HP model number is MSA2012fc, but the DotHill number would be 2730. It's your typical-looking 3U, 12-disk array. It has two controllers and two 4Gb uplinks per controller. Not too bad. It only does whole-disk RAID sets, however, so it'd be a little silly to plug it into a SAN switch, but it can be done. The HP web interface is pretty straightforward and has simple concepts: you create a vdisk, which is a RAID set of drives, and then you carve off chunks of that vdisk to present as LUNs to the hosts. Similar to the old-school CLARiiONs, it has the notion of assigning a LUN to a particular controller, so you have to manually/mentally balance your workload. It does provide some fairly comprehensive performance stats which can help in that regard, and there's a command line interface, w00t! So far my results with these devices haven't been good at all.

First off, I should point out we have two of these. One works, one doesn't. On the one that works, when I run a simple Bonnie++ test I get results like this:

Version 1.95          ------Sequential Output------ --Sequential Input- --Random-
Concurrency  1        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine          Size K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
OS RHEL 5.3      118G   749   99 172335  42 66940  15  1518  59 199006  18 160.0  24
Latency               11964us   25145ms    3660ms     346ms     571ms    45881us
Version 1.95          ------Sequential Create------ --------Random Create--------
                      -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                       /sec %CPU  /sec %CPU  /sec %CPU  /sec %CPU  /sec %CPU  /sec %CPU
                      28611  66 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
Latency               13800us     108us    1157us    1328us      13us    1145us


Not very good at all, especially considering it has 4GB of cache and this is the only test running. I mean 25145ms worth of latency for the block output? Yikes!
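For the record, that was a plain-vanilla Bonnie++ invocation, something along these lines (the mount point and label are illustrative; the -s size is set to roughly twice RAM so the cache can't hide everything):

# -d test directory, -s file size, -u user to run as, -m label for the report
bonnie++ -d /mnt/msa-test -s 118g -u root -m msa-good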

The second one seems to work OK at first. I can do a "dd if=/dev/random of=somefile" of arbitrary size and it happily chugs along. But if you run Bonnie++ against it, you get a lot of errors:

Jul 21 10:09:36 oracle-dev-02 kernel: : exe="?" (sauid=81, hostname=?, addr=?, terminal=?)'
Jul 21 11:00:50 oracle-dev-02 kernel: sd 0:0:0:1: SCSI error: return code = 0x08000002
Jul 21 11:00:50 oracle-dev-02 kernel: sda: Current: sense key: Aborted Command
Jul 21 11:00:50 oracle-dev-02 kernel:     Add. Sense: Scsi parity error
Jul 21 11:00:50 oracle-dev-02 kernel:
Jul 21 11:00:50 oracle-dev-02 kernel: end_request: I/O error, dev sda, sector 934774511
Jul 21 11:00:50 oracle-dev-02 kernel: Buffer I/O error on device sda1, logical block 116846806
Jul 21 11:00:50 oracle-dev-02 kernel: lost page write due to I/O error on sda1
Jul 21 11:00:50 oracle-dev-02 kernel: Buffer I/O error on device sda1, logical block 116846807
Jul 21 11:00:50 oracle-dev-02 kernel: lost page write due to I/O error on sda1
Jul 21 11:00:50 oracle-dev-02 kernel: Buffer I/O error on device sda1, logical block 116846808
Jul 21 11:00:50 oracle-dev-02 kernel: lost page write due to I/O error on sda1
Jul 21 11:00:50 oracle-dev-02 kernel: Buffer I/O error on device sda1, logical block 116846809
Jul 21 11:00:50 oracle-dev-02 kernel: lost page write due to I/O error on sda1
Jul 21 11:00:50 oracle-dev-02 kernel: Buffer I/O error on device sda1, logical block 116846810
Jul 21 11:00:50 oracle-dev-02 kernel: lost page write due to I/O error on sda1
Jul 21 11:00:50 oracle-dev-02 kernel: Buffer I/O error on device sda1, logical block 116846811
Jul 21 11:00:50 oracle-dev-02 kernel: lost page write due to I/O error on sda1
Jul 21 11:00:50 oracle-dev-02 kernel: Buffer I/O error on device sda1, logical block 116846812
Jul 21 11:00:50 oracle-dev-02 kernel: lost page write due to I/O error on sda1
Jul 21 11:00:50 oracle-dev-02 kernel: Buffer I/O error on device sda1, logical block 116846813
Jul 21 11:00:50 oracle-dev-02 kernel: lost page write due to I/O error on sda1
Jul 21 11:00:50 oracle-dev-02 kernel: Buffer I/O error on device sda1, logical block 116846814
Jul 21 11:00:50 oracle-dev-02 kernel: lost page write due to I/O error on sda1
Jul 21 11:00:50 oracle-dev-02 kernel: Buffer I/O error on device sda1, logical block 116846815
Jul 21 11:00:50 oracle-dev-02 kernel: lost page write due to I/O error on sda1
Jul 21 11:00:50 oracle-dev-02 kernel: Aborting journal on device sda1.
Jul 21 11:00:50 oracle-dev-02 kernel: __journal_remove_journal_head: freeing b_committed_data
Jul 21 11:00:50 oracle-dev-02 last message repeated 5 times

Wee, fun! Even better, it corrupts the disk beyond recognition. You can't even fsck it in a reasonable amount of time; it's faster to reformat. We've had HP on site and opened a case or two on this, but so far have gotten nowhere. For me the biggest issue is I don't have enough time to sit on hold with the call center while they find someone who knows how to use Linux with an MSA.

So far I've tried swapping HBAs, cables, ports, and fibers, and updating all the firmware and drivers (the MSA, the HBA, the OS, etc.). I've tried different versions of RHEL, 5.3 and 5.2. I've tried using only the HP-supplied drivers, firmware and utils. All with the same results. Later this week I'm going to give in, install win2k3 or win2k8, run something like Bst5 or iozone, and hope I can reproduce the error. Under low loads it doesn't error out; it performs poorly, but doesn't error out. I hate these kinds of problems. There's obviously something wrong with the array, since one works and the other doesn't, but it passes all the diags. At some point this thing's going to end up like the printer in Office Space.
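For what it's worth, the iozone run will probably look something like this, whichever OS it ends up on (path and size are illustrative):

# -a auto mode (cycles through record sizes), -g caps the file size at 8GB,
# -f puts the test file on the suspect LUN
iozone -a -g 8g -f /mnt/msa-test/iozone.tmp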

Friday, July 10, 2009

Getting back to Linux...

Back in my university days I was all about Linux. My first 'machine' was a 386SX, probably 16MHz or so, and booted off a 5.25" floppy. You had to compile the kernel every time you wanted to make any kind of change and then 'rawrite' it out to floppies. And forget about package management (well, until Slackware, for me...). My first 'workstation/server' that I seriously used, scuba.uwsuper.edu, was a 486DX-50 with a Cyrix CPU, around 1992. I think it might have had 256MB of RAM and an 80GB Seagate drive (3.5" form factor, no less!). I think some archive.org listings for the web pages I used to host on it are still around, although from near the end of my use of it: http://web.archive.org/web/*/scuba.uwsuper.edu Good times...


Then I moved to the DC area and started working with Sun and IBM AIX hardware, and Linux became a novelty/side item for me. RedHat, back when you could run it without paying for it if you didn't want to. I'd have a second PC in my office, mostly to act as my X server for working with the Sun boxes more than anything else. At Convergys and the Red Cross we had Linux, a fair bit of it too, but in most cases it was never the 'core' of the product/platform offerings.


Well, my current employer, StreamSage, is primarily a Linux shop, in particular a RedHat shop (Comcast, the corporate parent, is a large RedHat customer). So it's been an interesting time getting back into the swing of things. On the one hand, I really like getting back into the Linux state of mind. On the other hand, I've really come to appreciate the work that has been done in AIX and Solaris in terms of hardware management, diagnostics and configuration. There are Linux equivalents in a lot of cases, and a lot of it is an artifact of the hardware and software being built by the same people, but boy, I miss the AIX and Solaris troubleshooting tools.


The unpredictable future and buying hardware...

At my previous client's site, they have some old Sun servers that they are upgrading to M5000s. The hardware they evaluated (and they did very rigorous testing) was a T5240, a T5440 and an M5000. The M5000 was configured with 4 processors, not the full 8 that are possible, which is the subject of this post. When choosing the M-series server they went with the M5000 because it would have 2 free slots, allowing them to add memory or processors later. So they are trying to protect themselves against a CPU utilization problem down the road by having slots to put additional capacity in. I've been down this road a few times myself. I bought 880s and 890s with 4 procs just in case we needed the other 4 down the road. Unfortunately, most of the time I never needed those slots and wasted the rack space, power and cooling. In my current client's case they should probably go with the M4000 instead.


Comparing list prices, maintenance (numbers are swags for platinum pre-paid for 3 years), Veritas Storage Foundation for Oracle (I'm typically a Veritas user, so that adds complexity; last time I looked, a year and a half ago, the M4000 was tier E and the M5000 tier H), and Veritas maintenance (also a swag, 3 years):

                        M4000      M5000
List price              $66,380    $81,880
Maintenance (3 yr)      $17,000    $22,000
SF for Oracle           $4,000     $9,000
Veritas maint. (3 yr)   $2,400     $2,700
Total                   $89,780    $115,580

The rest is a bit of a wash.


So the M5000, which has two advantages, 4 internal drives (no value unless you're partitioning) and 2 expansion slots (potential future value), carries a roughly 29% price premium over the M4000. The reason engineers like myself make choices like this is the unpredictable future. Often when I'm asked to spec out hardware for an application, I'm given initial requirements like 12,000 total users, 300 users concurrent. And if I'm lucky, some information about the resource utilization associated with each user session. Most of the time it's a shot in the dark, however, and I have to dig around for similar usage profiles via Google and try to work them into my sizing model. But that's relatively straightforward. There's some art and finesse to it, but at the end of the day it usually comes down to a derivative formula of X sessions * Y-MB-per-session + overhead + wiggle-room = Z GB of memory. Same kind of thing for CPU and I/O.

Where it gets hard is when you have to forecast the life of the machine. You're forced to try to pick a machine that will meet the needs of not only year one but years two through four or five as well. When we ask the customer what their growth rate is, they'll usually shrug and give a non-answer. Or they'll give an answer that's based directly on other non-knowable facts, like "our user base will increase at the same percentage as our market share." Great. Thanks for that. It's very tempting to just go out and buy the top-of-the-line server to ensure we never get a resource problem. Buy a tour bus when all we need is a passenger van. But when they see the sticker price of that tour bus, we're usually back to the drawing board. That's what makes machines like the M5000, or the 890 it replaced, so appealing. It has room for an extra row of seats in case the number of passengers increases drastically. Unfortunately, you have to pay extra fuel costs to haul that extra space around (maintenance), and there's the up-front acquisition cost as well.
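To make that formula concrete, here's the back-of-the-napkin version for the 300-concurrent-user example (the per-session footprint, overhead and wiggle-room numbers are made up for illustration):

# 300 concurrent sessions at ~40MB each, 8GB of OS/app overhead,
# and 25% wiggle room on top
SESSIONS=300
MB_PER_SESSION=40
OVERHEAD_MB=8192

BASE_MB=$(( SESSIONS * MB_PER_SESSION + OVERHEAD_MB ))  # 20192 MB
TOTAL_MB=$(( BASE_MB + BASE_MB / 4 ))                   # add 25% => 25240 MB
echo "Sized memory: roughly $(( TOTAL_MB / 1024 )) GB"  # ~24 GB

Run it with a different per-session number and you see quickly how sensitive the answer is to that one guess.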


It's all about going back to the well. The reason we over-build our infrastructure this way is the difficulty of going back to the well for additional funding. In my work in the non-profit space there's a real risk of that well being dry, too. For example, I could buy the M4000 and then, if I have problems in year 2 or 3, do a forklift upgrade to the M5000 (swap the boot disks and away I go). Easy stuff, except I have to actually buy that M5000. Which comes with lots of questions: Why didn't you buy an M5000 from the start? Why were your forecasts wrong? Where do you think we can come up with that kind of money? Collective amnesia will shift all the blame to the people who spec'ed out the system. Blame rolls downhill; it picks up mass and speed as it rolls, and engineers are usually at the bottom of the hill with the operations folks (often one and the same). So we buy machines that have that 'extra reserve' built in. In high-end servers you can usually turn on the additional capacity by purchasing a license key, but in the mid-to-low-end range we're only offered machines with expandability. So if our forecast is off or the conditions change, we're able to bring a lower incremental cost to the table to gain additional performance and capacity. Unfortunately for me, however, I have rarely needed that expanded capacity. I can only remember two examples: one success, where we added two boards (4 CPUs + memory) to an 890, and one failure, where they no longer made the version of the boards we had in the server, which meant we would have had to replace all the boards at a cost almost as high as replacing the server outright.


I used to be a 'keep something in reserve' kind of engineer. Be able to pull the rabbit out of the hat to meet the increased demand that we didn't know was coming; basically, pull off a Montgomery Scott to save the day. By doing so, however, I have enabled the behavior that got me here in the first place. By not purchasing the equipment the requirements suggest, and adding some reserve 'just in case', the cycle repeats itself. Now, I'm not going to purchase the bare minimum needed to meet the requirements given to me (however flawed they may be), but I am going to start putting the decision back on the requesters and have them make the choice. In writing. With as much concurrence as can be achieved from the project team as a whole. So if I were to travel back in time to the period before the aforementioned M5000s were purchased, I would offer the M4000s instead. I would tell them: you save X dollars up front; your downside risk is you may have to replace this server if your usage or growth models are wrong. And, perhaps most importantly, I'd get documented concurrence from the stakeholders.


This has turned into a much longer post than I had originally intended... phew. Now onto my next client/project.


Tuesday, June 16, 2009

The writing's on the wall, it seems...

According to a story in the Register, Sun's next-gen server chip "Rock" has been killed. The corresponding servers are also dead. The SPARC T series is still alive and generating revenue, and Fujitsu is still pumping out SPARC64, so the future isn't dire, but I think it's fairly clear that SPARC as a chipset just went on the endangered species list. On the other hand, like they speculate in the article, maybe it wasn't going to work technically, or maybe they're cleaning house before the takeover. Either way, the 'next generation' SPARC is no more.


Wednesday, June 3, 2009

It's a rough job market, in pictures...

Great blog post today on the O'Reilly Radar. It graphs the number of online job postings month by month for the last four years. I won't steal the pictures here, you'll have to click the link to see them, but it confirms what I've been seeing while doing my job hunting. It looks particularly bad for the folks back home in Minnesota: a slight downward trend, with a slight uptick now. Although I have a feeling it will ramp up a bit more for June and July as more government spending trickles out of the stimulus bill. Personally, I went the months of March and April with very few job opportunities. Late April and May it picked up slightly (and I landed a short-term contract). Now here in June, no fewer than 5 promising opportunities have come my way. It figures: as soon as I take a long-term job (W2, not 1099 or c2c), other options come out of the woodwork. Extensions and increased hours at my current contract. A data center move with CSC (love those, I'm good at them and they have a set end date). A systems engineering lead with the direct competition of the company I'll be working for on Monday (a slightly more senior role to boot... DOH!). A data center manager job... SAN/VMWare engineer... All promising, but I won't be pursuing any of them. I've placed my bets and am going to ride it out for a minimum of 6 months, more likely a year or more. My strategy/thinking there is an idea for another post.


Anyway, good luck out there...


Wednesday, May 27, 2009

I love catching websites with their proverbial pants down...

I love unhandled errors from websites. Today, while submitting my timesheet for the day, I received the following:


Microsoft OLE DB Provider for SQL Server error '80040e14'


Incorrect syntax near the keyword 'And'.


/EZM/traddfreets.asp, line 752


It tells me that I'll always have a job in the future. I waited a few minutes and then tried to re-submit my time and got:


Microsoft OLE DB Provider for SQL Server error '80040e21'


Multiple-step OLE DB operation generated errors. Check each OLE DB status value, if available. No work was done.


/EZM/traddfreets.asp, line 563


These, in particular, seem 'frog march' worthy. What this tells me is they have an error in production and are trying to fix it IN PRODUCTION! That's a no-no in my book. I understand the desire to return to service, but what if your 'fix' inserts bad data? What if it corrupts something? What if it gives access to protected data?


And worse still, the only thing I get is the SQL error. I get no 'sorry' page, no link to go back to the time sheet, no 'handling' of the situation. It's easy to see how this happens. All too often the focus is on the positive-outcome side of things: given A, then do B, C and D and then return E. There's usually error checking along the way, which is half of the negative-outcome side of things. I often see errors and stack-trace output, some of it boilerplate from the underlying components like the ones above, others written by the website folks in question. It reminds me of the Seinfeld episode where Jerry is trying to pick up his rental car and there's no car available (paraphrasing): "But I have a reservation for a car." "I know what a reservation is, sir." "I don't think you do, because if you did, I'd have a car right now. Anyone can take a reservation. It's the holding of the reservation... that's really the most important part!" (this bit gets used a lot by a lot of people, judging by the Google hits). In this case, they know how to 'throw' the exception. It's the catching; the catching is the most important part.


Oh well, the silver lining is job security.


Friday, May 22, 2009

Google Apps, custom domains and the G1

I have a few domains for my personal and professional use. When I first started my personal website/domain I took whatever came from the hosting company; in my case, www.powweb.com is the provider in question. I've been generally happy with Powweb, but I also have straightforward, run-of-the-mill needs. The only thing that was a problem for me was email. I needed to be able to read my mail from multiple machines, and your typical provider only offers POP3 access and webmail. Webmail just doesn't work for me, and multiple mail clients with POP3 are problematic. My initial solution was to use Gmail to retrieve the mail via POP3. Then via Gmail I can use their advanced webmail client and desktop clients via IMAP. I probably would have gone with Yahoo Mail had they offered IMAP for free; for some reason I like Yahoo's webmail better.


Gmail isn't without its problems, though. When you send email via Gmail from your @gmail.com account, it has a funny header that some MTAs don't like to honor. So your email will have header lines:


Sender: user@gmail.com



Return-Path: <user@gmail.com>


but a from line of:



From: Richard Whiffen <me@whiffen.org>


In the past it used to say


From: user@gmail.com on behalf of user@whiffen.org


which was even worse. Anyway, some older mail apps reply to the @gmail.com address. This becomes a problem when the reply goes to a large group of people: the threads get fragmented because some people are mailing @gmail.com and some @whiffen.org. Doesn't happen often, but enough to be a bother.


The fix is to sign up for Google Apps. If you have fewer than 50 mailboxes (not aliases, actual mailboxes), the free edition is quite powerful. When you sign up you can either buy a domain via Google, or you can use a domain you already own. I think it'd likely be cheaper via someone other than Google, but your mileage may vary. I already had my domain, so I signed up and, via some fairly simple steps, was able to point my MX records from powweb.com to google.com. So now I can log in and get a Gmail interface to my @whiffen.org mail. Google's mail infrastructure and spam filtering are far more robust than my hosting provider's, so I have had a noticeable drop in spam since moving over. I also have an online calendar and Google Docs @whiffen.org as well. So now when I send email from @whiffen.org there are no funky headers or other issues like that. It does mean, however, that I now have two mailboxes: @gmail.com (which I only use to subscribe to listservs) and @whiffen.org (and @rwhiffen.com, but that gets no traffic at all). In Mail.app and Outlook it's trivial to manage. It is a bit trickier from the web interface; I basically have to log in more than once.
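If you go this route, you can sanity-check the cutover with dig once the DNS change propagates. The records below are the standard Google Apps MX set as I understand it; your priorities may differ:

# Verify the MX records now point at Google instead of the hosting provider
dig MX whiffen.org +short

# Expected output, give or take ordering:
# 1 aspmx.l.google.com.
# 5 alt1.aspmx.l.google.com.
# 5 alt2.aspmx.l.google.com.
# 10 aspmx2.googlemail.com.
# 10 aspmx3.googlemail.com.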


Where it gets fun is my G1. I have a T-Mobile G1, the 'Google phone' if you will. When I bought it I signed in with my @gmail.com account, and when I'd click the nice red Gmail envelope I'd get my @gmail.com mail and my @whiffen.org mail. But sending mail as @whiffen.org wasn't possible on the phone without extra work. The simplest path is to set up the Mail app on the phone (it's separate from the Gmail app), which will connect to a POP3 or IMAP host. But with the Google Apps setup, I was able to factory-reset my phone and sign in with @whiffen.org instead of @gmail.com, and it worked like a charm. Now I have a single interface to my @whiffen.org email via the phone, via the web and via my Macs. What's more, my calendar and calendar invites are now @whiffen.org.


The Google Apps setup also comes with Google Docs, which I'm using to co-write some documentation currently; a very handy tool. It has a fairly flexible version control and permissions structure, and can do fairly robust word processing and spreadsheets. I find the spreadsheet navigation a bit clumsy at times due to the web-based nature of it; data entry isn't as smooth as it is with a local application. I haven't tried the Google Gears feature for offline editing yet. But for the basics it's pretty good. I essentially use it to rough in the documents and then finish them up in NeoOffice or in MS Office via Parallels.


A hidden gem, I feel, is Google Sites. Google Sites is very similar to Microsoft's SharePoint. You have fewer widgets and less flexibility, perhaps, but you do have a lot of base features. You can make a file cabinet page for simple file storage, retrieval and versioning. There's a dashboard template that lets you add Google Gadgets to the page (weather, docs, Excel sheets, movies) but is generally intended to give you a portal-like view into your other site pages. There are also announcement and list templates. All together, it's easy to see turning Google Sites into a small company portal for sharing information, which is what I believe its intended use is. Although I do find it ironic that it's not tied into Google Docs: when you add things to your 'file cabinet' page, you have to find the URLs of your Google Docs via the Google Docs page and paste them in as web links. You can't select them from a list. I suspect this will be improved over time, but I was a bit surprised by that lack of integration.


What will really make Google Apps interesting is when it gets integrated into Android. Then your phone will be tied into this nexus as well, giving you a lot of power from a phone. I'd love to be able to at least read my docs on my G1. That's one area that is sorely lacking in the current G1 and the forthcoming 1.5 version coming 'any day now'. It'll come some day, but probably not for a while; I think they want to get things like Adobe Flash working first.


If I ever decide to start a small business I'm definitely going to use Google Apps instead of running an email server and a SharePoint server, especially when the first 50 email users are free. I do find the 'premier' edition pricing a bit odd: it's $50/user/year, which for what you get isn't too steep. That $50 gets you 25GB of mail, a host of extra security options, a 99.9% uptime SLA, and more support. But for a lot of small companies, especially the 10-or-fewer-people kind, it would be tough to justify $50 versus free. The security options might make it worthwhile, but other than that, why would you do it? I'd be curious to know how many freeloaders like myself are out there versus paying customers under the 50-user mark.


I've looked at the 'live.com' offerings from Microsoft, and I was very disappointed. It's far too confusing initially. They're also trying to be everything web 2.0 all rolled into one. While the idea is reasonable, as with a lot of things Microsoft, the execution is poor. Once I figured out what they were trying to do it made a twisted kind of sense to me, but it was still cluttered and confusing. They forgot one of the simple things about the web 2.0 experience: most things are separate by default, and you have to choose to join them. Not with Windows Live; they linked and cross-linked everything. I looked at it, acknowledged it, and moved on.


Anyway, if you have your own domain I'd strongly suggest you give Google Apps a try. It's great email hosting if nothing else: fast and free, with extras that appeal to small businesses or groups. If you're one of the lucky ones who has a GrandCentral account (now called Google Voice, apparently), you also have a central phone number for your business, again for free.


Saturday, April 25, 2009

Thoughts on Oracle buying Sun...

So Sun finally found a buyer. The IBM and Sun deal fell apart for various reasons (depending on who you ask) and now Oracle has swooped in and temporarily entered the hardware market. I've been writing this post in bits and pieces since I heard the news. The piece kept getting longer and longer and more unwieldy every day. So I've decided to break it up into three separate posts.

 

Overall I think it's a great deal for Sun; it remains to be seen how it works out for Oracle. One interesting aspect of the acquisition to me, on a superficial level, is the logos.

[image: the Oracle logo, in Oracle red]

vs.

[image: the Sun logo, in Sun blue]

For my money, Sun has the superior logo and color, at least since switching from that Fisher-Price purple. A red Sun logo looks pretty sad, IMHO. Huh, 'Red Sun' has an interesting ring to it.

The Dead Pool, who dies in the Oracle purchase of Sun?

Since Sun's primarily a hardware company and Oracle's primarily software, you'd think there wouldn't be a lot of overlap between them, but you'd be wrong. There are going to be a few hard choices for Oracle to make. Some of them are easy: Sun's Java System Web and App Server products (aka SunONE, aka iPlanet, aka Netscape) will be killed. They weren't that heavily used anyway, so no one will really miss them, and Oracle can offer an easy transition. The area of identity management is going to be a sticky one. The Fusion stack has an integrated LDAP already, but my limited exposure to it has been unfavorable; Sun's LDAP and IDM products are only slightly better in my view. Both have good and bad points, but in the end you only need one. So it'll be interesting to see what happens there. I suspect that the Sun products will lose most of these fights. They just don't have the momentum or market presence to stay.


Then you get into the sore spot for a lot of people: the open source products and projects that Sun supports or owns. Oracle already has Oracle Developer Suite. Sun has Sun Studio and the fairly popular NetBeans. And developers are the most Kool-Aid-drinking crowd you'll ever find. You think Linux and Windows folks don't like each other? Just put a NetBeans user in the same room as an Eclipse user and ask them which is the better IDE. There's no good answer here. You can't keep them all, although because of the open source nature of NetBeans there isn't a lot of expense in keeping it around. Sun Studio is probably already dead, we just don't know it yet. Then there's MySQL... Seems like Chicken Little just did this dance about a year ago when Sun bought them. Some folks think it's not as bad as it sounds, and I think they're right. Oracle could actually bring some real value to MySQL. The open source issue would still be sticky, and they don't want to erode their own sales, but there could be a fairly sizable chunk of middle ground for MySQL and Oracle. Other open source projects probably won't be so lucky. Glassfish is probably going to get dumped; not a lot of upside for Oracle there. Then there's Oracle Linux and OpenSolaris. Although there's probably room for both, strategically they should pick one. My wishful-thinking bet is on OpenSolaris. The tie-ins with the hardware line are too strong to ignore, and Oracle Linux hasn't exactly lit up the server market anyway. But any way you slice it, I'm expecting forks of a lot of Sun-sponsored open source projects at the first sign of Oracle playing rough with the open source community.


Other interesting areas are going to be in the grid space. Sun and Oracle have competing grid technologies, and I'm not sure who wins out in the merging of those products. I suspect Sun Cluster will be put out to pasture in favor of Clusterware (which also portends bad things for Symantec's VCS product). Sun's N1 management suite will probably be left without a chair when the music stops as well. It will be interesting to see where Oracle simply drops a product versus where they merge products. In some cases the products have different strengths, and merging them makes a lot of sense. But by the same token you don't want to create market confusion in your product portfolio, so they have to walk a fine line.


So my software dead-pool list (dead as in not a Sun product/project anymore; forks/spin-offs not counted):



  • Sun Java System Web and Portal products
  • Sun's IDM/SSO suite
  • Glassfish
  • Sun Cluster
  • Sun Studio
  • NetBeans (*sniff*)


I'm sure I'll amend this list with a few that I've overlooked later.



Oracle as a hardware vendor

It will be interesting to see how long the 'Sun' badge stays on the equipment. Right now I think the Sun name will stay for 2 or 3 years, or until Oracle exits the hardware game. I'm betting that they will start cutting hardware sooner rather than later. Oracle loves fat maintenance contracts, so I could be wrong about them leaving the hardware game at all; there's a lot of recurring revenue to be had on the support contracts. But the Red Sun won't be able to compete in as many hardware arenas as it did in the past. I suspect underperforming lines will be culled quickly. I don't know what lines those may be, but if, for example, the blade servers aren't selling like hotcakes (relatively speaking, of course), then I think they'll be taken out behind the shed, Old Yeller style. Eventually I suspect all the server hardware will go this way, but that's just a speculative guess on my part. I know Oracle would love to have an app-to-disk solution on hardware that no one else sells. Unfortunately, that's no guarantee of success. If owning the whole solution were all it took to be successful, we'd all be running IBM and Apple.


The product areas I see getting killed off first are the blade systems. I could be mistaken, but I don't think the blade idea has much steam. The barrier to entry for most companies is too high, plus there's the 'eggs in one basket' problem that makes a lot of systems architects and engineers shy away from blades. I wouldn't be surprised if they kill off the Intel and AMD based systems as well, and let HP and Dell have that market. Although they could try to keep it going to have that bundled solution: systems preloaded with Unbreakable Linux or Solaris and preloaded with Oracle goodies as a value-add. The first spin-off will likely be StorageTek, which will likely kill the brand. IT history is full of brands that diminished under a new owner and then died after subsequent resale (who uses WordPerfect anymore?). The reason STK makes a good spin-off candidate is that there are other tape/storage vendors who would love to get their hands on its install base. The same cannot be said for the x86 business or SPARC.


My current feeling is we'll see product streamlining first and then reductions/spin-offs. Mostly scalpel-type cuts, with few major changes in the first 18 months. A lot will depend on the state of the IT industry. If IT purchasing is ramping up, Oracle will likely keep things going for 3 years or so. But if IT spending is in the tank, or they botch the merging of the two support/sales organizations, they'll go at the Sun hardware lines with a battle ax.


Friday, April 24, 2009

What I like about the Oracle purchase of Sun

I'm not quite sure what the 'vibe' on the street is (and by street, I don't mean Wall Street; they're like headless chickens riding a herd of sheep). So far I think reaction is mostly neutral or 'wait and see'. The open source community is up in arms fearing a MySQL death blow. I think that fear is unwarranted at this time, and I'm not alone in that thought. I think there could be some real upside for Sun customers, especially Sun/Oracle customers. Oracle brings financial stability and a customer base to the table. Sun brings a recently invigorated streak of innovation and solid hardware creation and manufacturing skills.


It would be interesting to see what kind of Oracle appliance the combined team could come up with. I have visions of an appliance based on the T5240 that auto-configures via Enterprise Manager Grid Control. It would ship pre-installed with RAC and the Fusion stack. You plug it into the appropriate VLANs, it attaches to the grid, and then you tell it what kind of work you want it to do. It'd have 128 threads and a good mix of disk and memory. If you're starting to run out of capacity, just add another pre-packaged building block to the grid. The devil's always in the details, but I think it could work. It'd be a bear to patch/upgrade, unless the process automatically segmented the grid into upgraded vs. non-upgraded nodes and, once a threshold of nodes reached upgraded status, switched traffic to those nodes en masse. Because no one else builds a T2-based server, you'd have to get all your kit from Oracle, which could drive future revenue. And if they make the entry point easy enough and the OS/hardware simple enough, more small businesses could run Oracle products than before.


Oracle (which owns most of the products I encounter in my professional life these days) could optimize and integrate the Solaris kernel to provide added speed and flexibility on Sun hardware, especially in regards to performance and scalability. If Sun can optimize its kernel for the Oracle RDBMS, or Java for the Fusion middleware stack or WebLogic, without customizing the respective products, it could be a big win for Oracle. I think the trick is going to be not customizing the products. If they do, they open themselves up to the same anti-trust talk that Microsoft faced. We're a long way away from that, but they wouldn't want to give their competition any ammunition. There was a time when the VOS (Veritas, Oracle, Sun) stack was the way to go if you needed to scale big. They even created a joint center for a while to work on issues up and down the stack (not sure where that ever ended up). But there's only so much a joint operation can do, because patents and IP rights get in the way. The barrier between two of those three has just been removed (and the third isn't as vital as it once was).


The other area I like is the cutting edge. For example, Sun's doing a lot with flash storage and mixed flash/disk storage. What if you could optimize the database to take advantage of that mixed storage pool? Oracle on a native ZFS pool? And perhaps most interesting (and the one that gets me giddy) would be native DTrace providers for Oracle products. Imagine the diagnosis options if you could natively probe an underperforming SQL query. Probe a 'lost' Tuxedo session? And then there are the GUIs that could be written to take advantage of the providers. You'd never buy Spotlight again.
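Even without dedicated providers you can get a taste of this today from the generic ones. A hypothetical sketch of the kind of one-liner I mean (the process-name predicate is illustrative; a real Oracle provider would expose far better probe points than raw syscalls):

# Count read() syscalls issued by oracle processes, bucketed by user stack --
# a crude stand-in for what a native 'oracle' provider could expose
dtrace -n 'syscall::read:entry /execname == "oracle"/ { @[ustack()] = count(); }'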


Perhaps the most important reason I like the purchase is that Sun survives. I selfishly want Sun to live on because I consider myself to be quite good at Sun. I'm fair at AIX and Linux, but Sun and Solaris are in my wheelhouse. I'm not averse to AIX or Linux; they're just not what I've had the most exposure to. If Solaris goes away, I guess I'd have to go with Linux, because I can run that on just about any hardware; AIX requires me to buy something from IBM.


At any rate, I'm glad Solaris lives another day.


Thursday, April 2, 2009

I miss the rejection letter

A while back Louis CK was on Conan O'Brien. He had a great bit on "Everything's Amazing, Nobody's Happy." Basically how things are amazing right now and people still have room to complain. My cell phone has more abilities than the first five computers I ever owned combined. And yet I still find time to complain about what it can't do.


Similar thing with the job market. Searching for jobs has never been easier. In good job markets you can put a profile up on the job boards and sit back and wait for people to find you. In tougher markets, you can search and apply from the comfort of your computer. No resumes to print, no cover letters to write with awkward salutations, no envelopes or stamps. You apply, and 5 minutes later it can be in the hands of the HR department. It's amazing. You can sit in any internet-connected location in the world and look for jobs in any major metropolitan city in the western world and then some. The 'web 1.0' way of doing it was a bit cold and impersonal, so sites like linkedin.com have stepped in to meet that need (my profile's here). There are other avenues as well; EMC has a Twitter account for jobs. It's never been easier for job seekers and employers to find each other.
As the Joe Walsh song says, "I can't complain, but sometimes I still do." I miss the rejection letter. I still get them every once in a while, but I haven't gotten any on this latest round of job searching, and I think it's an artifact of living life at internet speeds. I've sent resumes and inquiries to a handful of opportunities but haven't received any kind of response. From others I have received a canned response from the HR application used to apply, or an auto-reply email from the HR@company.com mailbox. I guess I can forgive the lack of response. The flood of potential applicants, and the number of dead ends therein, would make it a fool's errand to respond to them all personally. With the ease of applying for jobs, a person can apply to several dozen jobs in an afternoon, so the likelihood that an applicant has already taken another job by the time HR gets to them is pretty high. So I guess I shouldn't be surprised that HR doesn't get back to me personally. But that doesn't mean I have to like it. Sometimes it would be nice to know that a real person at least received my application. It would be nicer still to know that I didn't get the job, so I wouldn't be left to wonder.
Oh well, even though everything is great, I'm not completely happy...

Thursday, March 26, 2009

A rambling post about twitter, Steve Case and Revolution Healthcare (mostly twitter though)

I've been on twitter for a little while now, and I've had a few thoughts running through my head, but never enough to warrant a post, until now. I want to cast these thoughts into bytes for later mockery. I expect when I retire I'll come back and read these posts (in whatever format they evolve into) and make a lot of fun of myself. Kind of like when I look at pictures of myself as a teenager now; the "what were you thinking!" kind of stuff. This could be a long and winding post, apologies in advance. On with the dreck!


So I'm on twitter mostly as an easy method to update my facebook status, because Ecto (my blog editor of choice) has an option to set your twitter status to announce new blog posts, and lastly (and perhaps secretly the real reason) for the pure geeky-ness of it. For the most part I follow people I know, but I do follow some non-people, like the American Red Cross, and a few celebrity types like Dr Tiki or The Big O and Dukes radio show. Recently I got back on the Diggnation bandwagon and decided to follow Alex, Kevin and a few others. Kevin Rose tweeted a note about a new website he put up, wefollow.com, which is a twitter directory that ranks by number of followers. Good idea that fills a certain need. I used it to find other interesting people to follow, like Tim O'Reilly, Leo Laporte, Snoop Dogg and Steve Case. Initially I started following people like crazy; before I knew it I had added a few dozen. I did this late at night, so it seemed like a great idea at the time. Then the next day started, and I began to drown in an avalanche of tweets. Some very interesting ones, like Tim O'Reilly's, and some funny ones, like Christopher Walken's. (The irony of cwalken to me is that I distinctly remember not laughing very much when he was on SNL a few years back. Go figure.) The rest of them were just pure clutter for me. When I finally got back to check my timeline on my phone, I had 375 tweets waiting for me. In less than 12 hours. DOH! So I started dropping people like wet socks off a clothesline.


Brief interlude. Back in 2007, after leaving the Red Cross, I was contacted by someone from HR at Revolution Healthcare. They made it clear that it was a Steve Case venture, basically using his name and reputation as a recruiting tool. At first I wasn't very keen on the idea; it didn't seem like a great fit. But the more I thought about it, the more I liked it. Healthcare, and in particular health insurance, needs a revolution, or at the very least a 'market correction', in this country. Steve Case's name could open the doors of companies that would otherwise ignore a startup. The ratio of insurance company people to health care providers is amazing to me (even worse when you look at just doctors). So I thought working for Revolution Healthcare might be worth a second look. If Steve Case could use his name recognition and access to other execs to open doors, the venture could be very successful. Turns out it's a good thing the HR person never got back in touch with me (although I was peeved at the time), since they've laid off a lot of people. It was a nice idea. Hopefully they can weather the storm and keep trying. I would have liked to have been a part of it.


Back to twitter, and Steve Case. I started following him because he came up in the top 10 for #tech on wefollow.com (he doesn't seem to be anywhere on wefollow.com now) and because of Revolution Healthcare. I wanted to see what kind of exec he might have been. Well, it turns out he likes the finer things. He's had some expensive tweets (well, at least for a lower-pay-scale guy like me): pics of the sunset in Captiva, FL, advice on where to stay in Maui, starting a resort in Costa Rica. Then there are the @user replies to people I don't know; kind of weird only seeing half of a conversation. But I kept with it for about a week. Today, I can't take it anymore. I have to 'unfollow' Steve Case. Not that he knows or cares, mind you; I wouldn't in his place. But he wasn't saying anything that was particularly interesting to me, especially in the context of what I use twitter for.


Which leads me to another twitter thought: twitter needs a feed-filtering ability. I'm not sure I've thought this idea out completely, but I need two twitter feeds. One for my 'A-list' people I follow: close friends, businesses that announce stuff via twitter (like MacHeist, w00t or the Red Cross) and the like. Then a 'B-list' feed where all these other folks can go. I could then catch up with their stuff when I have spare time or an interest. Maybe another way to look at it is to 'push' the tweets from group A, but 'pull' the tweets from group B when I want to see what they had to say. I'm sure there's a better way to say what I mean here. And who knows, maybe you can already passively follow people and not have them clutter up your timeline, and I just haven't figured it out yet.


Anyway, time to get back to the business at hand. Time to push 'publish' on this ramble and have it announce via twitter, go figure.


Thursday, March 19, 2009

IBM to buy Sun?

According to the WSJ, IBM is in talks to purchase Sun. The MSNBC version of the same story provides a lot more detail. There's a Bloomberg article, which seems to be down as I write this, indicating that Sun approached HP and HP turned it down.


One thing that strikes me is that all of this is coming from one source, so there may not be much to it; this could be just a bunch of speculative talk. That same speculative talk also indicates that HP was asked and turned Sun down (which is a bit surprising, because HP and Sun would be a better fit, IMHO).


But if we pretend for a moment that it's all reasonably true, some interesting things come to mind. The cross-country nature of the two companies will be a big issue. It's east coast vs. west coast, suits and ties vs. ponytails and sandals, IBM's cathedral approach vs. Sun's bazaar approach. On top of that, you can't have HQs on both coasts; which one goes? In the end IBM will 'win out', in my opinion, just out of sheer entrenchment. There just isn't enough Sun left to change things. There is the possibility of it being run as a separate entity, similar to EMC/VMWare. That's likely how they'd start, but eventually they'd restructure under the IBM way of doing things.


Another interesting choice is what becomes of the hardware lines long term. Do they push SPARC out in favor of POWER? Do they do both? Short term they would try both, I'm sure, but eventually they would have to drop one or the other, and I suspect they would drop SPARC. The high-thread, low-MHz model of the T1 and T2 chips doesn't have as much traction as I think Sun or IBM would like. The POWER line, on the other hand, takes the same tack as the SPARC64s, but with much higher clock rates. Perhaps a T1/T2 and POWER lineup would be workable, but it's a tough choice; neither CPU has the broad adoption of the x86 instruction set CPUs from AMD and Intel. I think in a merged company the Sun x86 lines would be gradually dropped within a year in favor of the IBM setup. Hopefully a 'best of both' approach would be taken, but I doubt it. As long as the OSes could be ported to both, any of the possible permutations and combinations would likely be workable, with the possible exception of keeping everything as-is.


The AIX vs. Solaris issue would also be interesting. Sun's current sunset dates (if maintained) would have IBM supporting Solaris for up to 5 years after a decision to stop Solaris. If they decide to drop AIX, they're faced with a similar issue (although IBM's EOL policy is a bit more cryptic). Either way you slice this one, there are going to be a lot of hurt feelings. The AIX and Solaris camps would be bitter enemies if it weren't for the common Microsoft enemy. My hunch is Solaris would be taken out back behind the woodshed and not come back. It would likely happen after all of the corporate structures are merged and aligned; then Solaris would be stopped, and OpenSolaris would be the road forward for Solaris holdouts.


The storage area is the only one that seems straightforward to me, and is the only technology area that IBM and Sun could merge easily. Since neither company is a storage leader (although IBM has a much larger market share), merging the product lines would be relatively simple (compared to OS or hardware). Neither company has a good story to tell in the mid-range market (although the 7000 line from Sun is interesting, if not proven). The high-end market is dwindling, and I think both companies will not invest a lot in that space, leaving it to Hitachi, EMC and NetApp (I exclude HP because the XP is pretty much an HDS...). Combined, they could also aggressively go after Dell's market share (and nibble at HP's). Dell is growing by virtue of selling tons of little storage, basically doing it on volume. And since the high-end Sun arrays are re-branded HDS gear, they just have to maintain the support infrastructure. Plus, Sun has the better play in the emerging flash market, and the IP knowledge that comes with it. Add the StorageTek side of the house and it's an idea with some promise. So far, this would be the easiest (but by no means simple) part of the merger.


The other software assets could also bring some value to IBM. MySQL and Java would join IBM's already impressive web offerings. Add Sun's middle-tier and middleware offerings to that mix and you have a very expansive software portfolio. And then there's the Sun port of Xen and the purchase of VirtualBox; both of these virtualization offerings would give IBM a small counter to the VMWare juggernaut. And then you can add Open/Star Office to the Lotus brand (when was the last time you saw a SmartSuite purchase?) for an enterprise desktop solution. After a bit of blue-washing the code base, and with essentially free access to the IBM patent portfolio, these products could really take off. There would be a few orphans here and there, like NetBeans; not sure where NetBeans fits in a Rational world. But the software portfolios could be a great combined force.


So far analyst reviews are mixed but trending towards 'thumbs down', and I can't say I disagree. The short run would be dreadful, and IBM would have to walk a delicate line to avoid losing Sun's existing customer base and its massive, active development communities. If end users and developers are going to be forced to change, there's no assurance that they will change to IBM; if you have to go through the trouble, you might as well look at all the options. Well, so far it's just idle speculation. Time will tell where this ends up.


Monday, March 16, 2009

Sun may be on to something...

Because I always love a good blog-battle, especially in the storage space, I was reading the storage blogs again, this time focusing on the Sun flash camp vs. the EMC flash camp. And since I am easily distracted by shiny things (it's amazing I finish anything I write here), I read some other posts by Adam Leventhal from the Fishworks team. He's posted some details of the Sun Hybrid Storage Pool strategy and how it works with flash. The presentation here is of particular note.


The post that I found the most interesting, and the reason I decided to write this, is "Casting the shadow of the Hybrid Storage Pool." Mr. Leventhal goes over the pros and cons of using flash as primary storage in an HSM array and correctly points out "The trouble with HSM is the burden of the M." Unless you have a good HSM tool that can slide old data to disk and leave the cache for 'hot' data, flash-and-disk combo arrays become a burden. Veritas addressed this class of problem in their VxVM product years ago to handle small fast drives vs big slow drives, so there's already a lot of ground covered in the industry here.


The other approach you can take with flash drives is to use them as cache. It's a great idea. They're like memory that doesn't need battery backup or de-staging for power outages. As Adam puts it, "Tersely, HSM without the M." It's the same school of thought taken by the HDD makers who slapped a few hundred MB of flash onto their laptop drives: fast access for the data you need, and bigger/slower storage for the rest. That idea never took off because no one wrote the drivers to take advantage of it (it's not as easy a problem to solve as it sounds). Well, in this case, Sun has "written the driver" by integrating it into ZFS. Pretty good strategy. They're certainly not alone; NetApp has taken a similar approach. I like the idea of getting the flash performance while removing the need to know about flash. Hybrid storage pools (HSP) could turn into the next storage optimization trend that all the vendors adopt.
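

For flavor, here's roughly what this looks like from the admin's chair, as a minimal sketch (the pool and device names are invented for illustration):

# Assumes an existing ZFS pool 'tank' of spinning disks and two
# spare flash devices; all names here are hypothetical.
zpool add tank cache c1t2d0   # flash as read cache ("HSM without the M")
zpool add tank log c1t3d0     # flash as the intent log, Logzilla-style
zpool status tank             # cache and log devices show up as their own sections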


The detail that makes me think Sun's on to something is that this isn't the only flash/HSP announcement recently, nor the only avenue they're pursuing. There's the "Open Flash Module", which is a JEDEC form factor flash drive for servers (kind of like a SO-DIMM that plugs into the motherboard). The initial capacity is only 24GB, but that will grow over time. If you can take the spinning drives out of a server, its power draw and size can drop considerably. This could be interesting for the embedded server and telco markets. They've also announced a truckload of servers with extensive flash support. There's also their NetApp-competing product, the Sun Storage 7000 series. Then there are the flash-based optimizations like Logzilla integrated into products. The point is that Sun isn't taking a 'let's graft flash onto our existing products' approach. They're not simply replacing existing components with flash equivalents and saying 'we do flash!' They're embracing and extending what flash can do. Now, that's not to say everyone else isn't as well; it's just that Sun is more open and up front about it. I think Sun is on to something with product designs that utilize flash at multiple points across the line.


So design is one thing, but implementation is another. The best design can mean nothing if the implementation is poor. According to several reviews, my beloved T-Mobile G1 is an example. I'm no expert on server design and engineering, so I'm just speculating here, but all of Sun's design work can be for naught if they screw up the implementation. If it doesn't live up to the design's promises because the hardware doesn't hold up its end, if it's just a bear to manage, or if it does things in a non-standard way, then it will likely fail. On top of that, the hardware and software need to be reliable and fail sanely. Nothing is a bigger product buzz-kill than data loss or downtime. But let's assume that the implementations are reasonably sound.


The next hurdle for me is delivery. How does Sun deliver this design and implementation to you? Great design and great implementation with horrific admin software would kill adoption. If the learning curve is too steep, or it requires a change in thinking from the current method of doing things, that will also slow adoption. I'm struggling to think of a good example of a change in thinking, so this example is a bit weak: if the current line of thinking is to "S.A.M.E." your data (stripe always, mirror everything), but Sun's approach is to "S.N.M.N." (stripe nothing, mirror nothing), that will hinder adoption as well, because the industry best practices from software vendors and other 3rd parties will advocate "S.A.M.E." and the sysadmin will constantly be fighting the 'but this is different, so I don't have to S.A.M.E.' battle. At some point, people just stop pushing the rock uphill and forgo the benefits of doing it right for peace and harmony with their co-workers. Another delivery obstacle is that the default or basic configurations must be sane for the majority of deployments. A ZFS pool with hybrid storage should provide benefits and perform well with default settings. It doesn't have to perform optimally, but it should perform well. If it requires infinite tinkering specific to every use case, there will be a flood of experiences at each end of the spectrum: people who love to tinker and fine-tune will offer up tales of wonderful performance and extol its virtues, while the 'set it and forget it' crowd will have a different view and poo-poo the product every chance they get. Lastly, Sun needs to get the flash-optimized options in front of people for around the same price, or with minimal added cost. Price can be a significant barrier to entry.
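

That's the bar, in my mind: a hybrid pool should come up sane with something like this single hypothetical command (again, device names invented), with ZFS deciding what's hot rather than the admin:

# Mirrored spinning disks plus flash cache and log, all defaults:
zpool create tank mirror c1t0d0 c1t1d0 cache c1t2d0 log c1t3d0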


It's an uphill climb to be sure. But I think Sun is on to something. If they can execute their vision and deliver on the promise of hybrid storage, they can become a relevant player in the storage market. Here's hoping the 'past performance is not an indicator of future success' axiom holds true for Sun in this case.


Tuesday, March 10, 2009

In case my recent birthday wasn't proof, I'm old...

So, if it wasn't already clear to me from my birthday and the fact that I'm now pushing 40 reeeeealy hard, here's an SMS conversation I had the other day that I could barely understand.


First, I should note, it was a wrong number; second, I have a full QWERTY keyboard, so typing isn't as big a pain for me, I guess.



HER: Hey



ME: Hey? This is rich, who is this?



HER: Sarah:-)



ME: Ok... don't think I know you



HER: Sarah jjs gf



ME: Uhhhh doesn't ring any bells wrong #?



HER: Ugh no i go to ur skwl sms u no u jus found out how i look lyk gosh



ME: I'm 39 and haven't been in school for years



HER: O ok sory rong #



So Sarah is the girlfriend of "J.J.S.", I presume. And if I read the long reply right, we go to the same school and I'm pretending not to know who she is because I think she's ugly. That's my guess anyway. I'm glad I don't know any Sarahs well enough to SMS with them. Not that I SMS much, but if I did and started talking to the wrong Sarah, I could end up on a Dateline special... YIKES!


Friday, March 6, 2009

Desktop or Laptop part II

Previously, I was debating with myself over whether I should get a laptop or a desktop to run virtual machines on. Well, things have changed a bit. On March 3rd, Apple upgraded the iMacs; they go to 8GB now. Further, HP makes a 17" laptop that supports 8GB and is around $760 (after rebates), maybe even cheaper elsewhere. Slam dunk, you'd think, right? Replace my 2006 iMac with a new iMac beefed up to 8GB, and all my problems are solved without taking up any more space on my crowded desk. Ahh, but life's never that simple. Go to the Apple store and try to buy a 20" iMac (or any size, doesn't matter) and customize it to 8GB. Go ahead. I'll wait.


Did you see what happened to the price? YIIIIKES! That $1,199 iMac jumps to $2,199. OUCH! I could buy a drawer full of Asus laptops for that and just set them on the floor when I need them. The same price jump is true for the HP laptops; it's between $400 and $600 per 4GB SO-DIMM to upgrade these things from the manufacturer. So I could go the 20" iMac route (or treat myself to a 24" iMac), buy 3rd-party RAM at a later date, and not pick up yet another device burning electricity in my house. I like that aspect: it keeps the same footprint but gets over my memory limitation.


Well, I have the desktop I'm going to buy picked out: a Dell XPS 435MT. Costco and Microcenter both have them for $999.99 (until March 31st), and other places (dell.com perhaps) probably have them at that price too. It's an Intel i7, which is quad-core with hyperthreading, which is kinda like having 8 cores. It comes with a 750GB drive, 6GB of RAM, and a good assortment of ports and expansion options. So I get lots of horsepower at the expense of floor space, lugging out the monitor every time it needs to be rebooted, and the fan noise. But it would be SWEET!


If it weren't for my desire to have a small, relatively clean computer area this would have been a no-brainer. Why do I need to make things so complicated?


The next assault on storage arrays...

Chris Evans had an interesting post discussing the cost of enterprise storage. The post was spawned by a question posted on ittoolbox.com by Ditchboy434. Yikes, it's like I'm spreading rumors in the eighth grade again... and then she said that he said that they said... Anyway, the brass tacks are: why is enterprise-class storage's cost per GB so drastically higher than personal storage? Why does 50TB of enterprise storage cost $500,000 when 50TB of personal storage costs $5,000? (Seriously, 50 1TB drives can be found for around $100 each.) That's $10 per GB versus ten cents. Chris accurately points out all the reasons why enterprise storage is more expensive. It has added value: more hosts can share that 50TB, you can cluster nodes with that storage, it has redundant/dual everything, fancy features for backup and spare copies, etc. It's fairly straightforward to explain what those extra zeros get you, and for the most part the business case holds.


But I think there could be a change on the horizon. Maybe it's already here and I just don't know it. With local storage getting so large, it's possible to put storage amounts in a 4U server that would dwarf arrays from 5 years ago. Even in a 'standard' server it's possible to put ridiculous amounts of local storage. It used to be that the only way to attach large amounts of storage to a server was to attach it to an array. That's not the case anymore. Now the array is surviving on its RAS and feature sets. So you might get an array so you can move data between two servers in a cluster: if your primary SQL server fails, the cluster moves the storage to the backup node automatically and you're back in business. But MS SQL 2005, for example, can mirror a database in software. You no longer need an array; just put equal amounts of storage in both boxes and you're on your way. The price per GB of individual drives and the price-to-performance ratio of servers have gotten so low that it's now a real option. Instead of a redundant array of disks, you can have a redundant rack of servers. More and more enterprise apps are adding replication abilities at the app level, and as that replication moves up the stack, the case for enterprise arrays gets harder.
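

To give a feel for that, here's a sketch of SQL 2005's software mirroring (server and database names are made up, and this glosses over the endpoint, recovery-model and backup/restore setup the real feature requires first):

# Run on the mirror server first, then the principal; after that,
# every committed transaction ships to the mirror's local disks.
sqlcmd -S mirror01 -Q "ALTER DATABASE Sales SET PARTNER = 'TCP://principal01.example.com:5022'"
sqlcmd -S principal01 -Q "ALTER DATABASE Sales SET PARTNER = 'TCP://mirror01.example.com:5022'"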


A brief tangent. Cloud storage is the latest thing in storage. The idea is you tell 'the cloud' to write your data and it takes care of finding a home for it, making sure it's secure, and making sure it's protected from loss. The same kind of features an enterprise array provides, but outside of a single array now. Provided you have bandwidth to spare you can have the storage spread out across the globe. Or you can simply buy storage as a service from a vendor, like Amazon. If you have the bandwidth, it can be a great way to handle ever growing storage needs.


OK, back on track. Cloud storage in most cases means wide area, i.e., not within a single data center. If you're in a single data center, the pitch is: don't buy our cloud storage product, just buy an enterprise-class array instead! Or at least that's how I think the argument would go. But what if someone comes up with an easy and efficient 'cloud within the data center' scheme? What if you could suddenly take all that local storage and pool it into "our storage" and share and share alike? I believe there are already some niche products that do something similar, but they have requirements and restrictions (and perhaps customer perception issues) that have prevented broad adoption. What if, for example, Sun's ZFS were able to work across servers with minimal admin intervention? I have to throw that caveat in there because what I'm describing could loosely be considered ZFS and NFS used together, but that involves a lot of admin intervention. Plus, I'd envision it doing iSCSI or FCoE instead of the NFS protocol. Now who needs an enterprise array anymore? When you fill a rack in your data center, you'd get a 2-for-1 special: one rack with both the computing power and the storage.
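

For a taste of how close the pieces already are, here's the manual, admin-heavy version on an OpenSolaris-era box (pool names and addresses are invented), which is exactly the workflow I'd want automated away:

# On the server with spare local disks: carve out a volume and
# export it as an iSCSI target.
zfs create -V 200G tank/lun0
zfs set shareiscsi=on tank/lun0
# On a second server: point the iSCSI initiator at the first box,
# and the LUN shows up as an ordinary disk.
iscsiadm add discovery-address 192.168.1.10
iscsiadm modify discovery --sendtargets enable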


Now, having said all that, it'll be a long time before anything resembling this comes to pass. Gig-E vs 2, 4 or 8Gb Fibre Channel is one reason. Storage I/O performance is another: one of the big attractions of enterprise-class storage is its raw performance, and that will be hard to overcome for now. On top of that, with the way storage vendors snipe at each other, their skills are well honed to attack a common enemy.


On the upside, failed drives could lead to an interesting game of whack-a-mole. With all the leg work running from rack to rack, sys-admin obesity could be a thing of the past.


EDIT: corrected some spelling, grammar and the price of 50 1Tb drives.


Tuesday, March 3, 2009

Cats and foil....

So I've always been told that putting tinfoil on countertops and the like will keep cats off of them; supposedly they don't like the sound or feel of it. Lola likes to get in front of the TV, especially when we're playing Wii. So I decided to put foil up there to keep her off. Here's what Lola thought of my foil and my attempts to remove her from the center of attention:

[photo: Lola sitting on the foil]

She's so unbothered by the foil that she even naps on it.

[photo: Lola napping on the foil]

I'm not about to break out the squirt bottles next to my HDTV…. Oh well, as long as she keeps her head down and I can see the screen…


Thursday, February 26, 2009

Desktop or laptop....

I've been thinking about getting another laptop or desktop lately. My goal is another machine to run virtual machines on.


Currently I have a MacBook Pro with a Core 2 Duo and 4GB of RAM, and an iMac with a Core Duo and 2GB. Since the iMac tops out at 2GB and is 32-bit only, it has limited usefulness for what I want to do. I want to be able to run Windows 2008 Server or Solaris 10 to practice and learn with. I can do some of it with my MBP, but a 2nd machine to network and cluster with would be ideal.


So this brings me to my choices. I could buy another laptop with 4GB of RAM. It has the advantage of coming with a built-in monitor, keyboard and mouse, and I can stick it in a drawer when I'm not using it. I don't have a lot of space, so that's a big plus. It's relatively quiet too. It does have less performance and expandability, though: it isn't upgradeable and tops out at 4GB. I think I can work within those limitations. I see geeks.com has a few 17" dual-core laptops for around $499 (they go in and out of stock every few weeks). Add about $75 to max out the RAM, add an external drive to run the virtual machines on, and you're all set.


Or I could buy a desktop and monitor for about the same price. It would top out at 8GB (or more). It could probably run VMware ESX or Sun xVM, which I really want to play with. It could also have more CPU cores for the same money. Lots of advantages, but I already have the iMac desktop, I'd have to get another monitor and keyboard (not a big deal, but they take up space), and it'd likely be pretty noisy relative to a laptop. For example, CyberPowerPC has an Intel i7 (quad-core plus hyperthreading, for 8 logical cores on HT-aware OS's) for $789 (as of February 26; these things tend to change a lot). Now I have CPU power, lots of RAM headroom, and faster drives. But it's harder to put this thing away and take it back out when I want to work with it.


It's a tough choice, but I think in the end I'm going to go with the desktop. Although it takes up more space, I can do a lot more with it. It also has a lot more long-term life and can be upgraded as technology advances, so some costs down the road can be avoided; with the laptop, the only upgrade path is to buy a new one. But the space savings and the convenience of being able to take a laptop with me are pretty tempting. I guess the 'left field' choice would be to replace the iMac with a Mac Pro (or a Mac Mini, if the upgrade rumors are true and the specs are right). I hate that choices are never cut and dried like this.


Monster got bit...

Good thing I started job hunting today, or I would never have found this out. Somehow I missed it in my downtime between engagements. Looks like Monster.com had a serious security breach. They forced me to change my password and gave me other 'just in case' warnings.


They say:



As is the case with many companies that maintain large databases of information, Monster is the target of illegal attempts to access and extract information from its database. We recently learned our database was illegally accessed and certain contact and account data were taken, including Monster user IDs and passwords, email addresses, names, phone numbers, and some basic demographic data. The information accessed does not include resumes. Monster does not generally collect – and the accessed information does not include - sensitive data such as social security numbers or personal financial data.



I say: You got hacked.... 'Illegally accessed' is like using 'collateral damage' to describe innocent bystanders.


They say:



Are you contacting consumers directly?



Monster elected not to send e-mail notifications to avoid the risk those e-mails would be used as a template for phishing e-mails targeting our job seekers and customers. We believe placing a security notice on our site is the safest and most effective way to reach the broadest audience. As an additional precaution, we will be making mandatory password changes on our site.



I say: "We're hoping people don't notice, but they're going to find out anyway. Oh well!"



I like this one:



What security measures do you have in place?

Monster has made, and will continue to make, a significant investment in enhancing data security, and we believe that Monster’s security measures are as, or more, robust than other sites in our industry.

Monster has a full-time worldwide security team, which constantly monitors for both suspicious behavior on our site and illicit use of information in our database. To maintain the integrity of these security and monitoring systems, we cannot provide further details.


Fat lot of good it did 'em! Ah well, it's not that bad in the end. I mean, if you're on Monster.com, you WANT people to find your resume... now it just happens faster. Hopefully I'm one of millions of people who were exposed and I fly below the radar of whoever has the data. Besides, who'd want to be me anyway?

Thursday, January 22, 2009

More Google auto-updates without asking...

Google's doing me wrong again... A while back I complained about Google auto-mounting a disk image and then updating software without letting me know or asking my permission. Well, they've done it again. Yesterday, HardwareGrowler reported that "GoogleVoiceAndVideoSetup_1.0.5.634" was mounted and unmounted, so it looks like the GoogleTalk feature of Gmail was updated. That's a good thing; updates are welcomed. Not asking me first is not. And in this case, I don't even see a preference to disable this behavior. So GoogleTalk is phoning home, downloading updates and installing them, all without a single notification.

Want to know some more scary stuff? How about this:


Jan 20 20:36:23 rwhiffen-macbook installer[10817]: Package Authoring Warning: GoogleVoiceandVideo.pkg authorization level is AdminAuthorization but was promoted to RootAuthorization for compatibility, ensure authorization level is sufficient to install.

Jan 20 20:36:23 rwhiffen-macbook installer[10817]: Package Authoring Warning: GoogleVoiceAndVideo.mpkg authorization level is NoAuthorization but was promoted to RootAuthorization for compatibility, ensure authorization level is sufficient to install.



And if you look in /private/tmp you will see that, yes indeed, Google did stuff as root:


rwhiffen-macbook:~ rwhiffen$ ls -ld /private/tmp/GoogleVoiceAndVideo.mpkg.10817E8zL7g/
drwxr-xr-x 3 root wheel 102 Jan 20 20:36 /private/tmp/GoogleVoiceAndVideo.mpkg.10817E8zL7g/
rwhiffen-macbook:~ rwhiffen$


So not only is it secretly phoning home and downloading an update, it's doing it as root. Now, I explicitly authorized root access upon install, so the update having root ability is by design and I approved it by typing in my password when I installed the software the first time. But I did not authorize subsequent use of that authorization. It's scary to think what trouble this could lead to. I'm assuming Google has some kind of cryptographic check to verify updates are legit before snagging them, but what if they don't? What if an ISP gets its DNS hacked and someone sets up a fake update? It'll run as root without anyone knowing. I guess I wouldn't have such an issue with it if I had an option to opt in or opt out.
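
If you want to check your own Mac, the installer messages above land in the standard logs, so a couple of quick commands should surface any recent activity (log locations vary a bit by OS X version):

# Look for the Google installer runs in the system log, and for
# the leftover package directories in /private/tmp.
grep -i googlevoiceandvideo /var/log/system.log
ls -ld /private/tmp/Google* 2>/dev/null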

So today I'm going to sign up for the Google groups, use some "ALL CAPS" language, and see if I can get any kind of response. Probably not, but it's worth a try.


Tuesday, January 13, 2009

Lots of syncing contacts... (the G1 saga continues)

So in the past, I used to sync my contacts to my phone via Bluetooth. Because Android is half-baked, the T-Mobile G1 has no such ability. But as an alternative, there's a periodic OTA sync feature for your Google/Gmail contacts. This, it turns out, really hurts your battery life. It's not a push from Google: your phone wakes up every 5 minutes (from what I gather from forum postings) and pulls contact, calendar and Gmail changes from Google. If you're away from WiFi, it's done over 3G/EDGE. So I turned that off and force a manual sync on demand, just like I used to do with my Nokia phone over Bluetooth.

First order of business: get the Google calendar into iCal easily. You could already subscribe (read-only) to your Google calendar, but you had to go into Google Calendar itself to add events. Well, they've added CalDAV support to Google calendars, which overcomes this. This gets me where I wanted to be a long time ago: I wanted the Google calendar to be my primary calendar but didn't want to depend on web access to update it.

So to get my MacBook Pro contacts to Google, I had to first enable Google syncing. If you have an iPhone or an iPod touch, this feature is available to you. If you don't have one of those, no worries, there's a hack to enable it, which I did. I even had an old iPod entry to hack to make it work (my 30GB 5th-gen iPod was stolen).

[screenshot: Address Book sync preferences]

It works great, but the merge of the Google (and Yahoo, in my case) contacts with your Address Book is terrible. Basically you end up with a ton of duplicates, plus contact entries for every 'suggested contact' email address you've ever emailed. Bleh! Because I have both Yahoo and Google sync enabled, those email-address-only and duplicate entries went to Yahoo too (why Yahoo? Because I can...). YIKES! To make matters worse, a lot of contacts were missing critical information, like phone numbers, addresses or email addresses. It seems the 'merge' didn't work at all. The fix was to delete all the Yahoo and Gmail contact entries, and then within iSync (which doesn't really make sense, because iSync is only used to sync to phones) reset all sync history.


[screenshot: iSync's reset sync history option]

Then I manually cleaned up my MacBook Pro contacts (painful, but not the end of the world) and ran sync from the menu bar again.


[screenshot]

Presto! Data in three places. Then I manually sync the G1 (Settings -> Data Synchronization, press the menu button, then 'Sync Now') and the same data is now on the phone. Data now flows in all directions effortlessly. Eventually I'll figure out how to put a 'sync now' shortcut on the phone so I don't have to drill into the menu.



But there's always a catch. In this case, the catch is instant messaging. It turns out (and I think it warns you) that if you delete a contact who is also an IM contact in either service, it deletes them from your IM list. DOH! So now I don't have any of my GTalk or Yahoo IM contacts. Fortunately most of my IM is MSN or AIM, but still, not fun. Now I have to go back through my address book, look for Yahoo and GTalk people, like Siva, Tariq or Dayton, and re-add them to the corresponding client. If I had it all to do over, I would log into the native clients and try to export my IM contact lists first. Then again, maybe it's a good thing. I had a contact and I couldn't for the life of me remember why it was there. Well, it's not there anymore...


Monday, January 12, 2009

Testing blogging from my G1

[photo uploaded from the G1]

In the Google marketplace (which needs improvement) there is an app called wpToGo, which is supposed to allow blogging from your Android handset. So I thought I'd give it a shot. The keyboard on the G1 is pretty bad for this kind of stuff, especially with my big thumbs.

Not sure I'll ever use it again, but it's nice to have. It will even let you upload a picture, which could be fun. The picture I posted is of Renee at the National Building Museum with her classmates.