Diary of a Mad Technologist

Dell Considers the Inconsiderable...Going PRIVATE? Maybe That's Not Such a Bad Idea After All...

1/24/2013

Here's an interesting one! There has been a lot of buzz surrounding the idea that the rootin'-tootin' cowboys of Round Rock, Texas fame are pondering a move to buy back their stock and take the company private. I am, of course, talking about Dell - the company with so many different tech fingers in so many different tech pies that Little Jack Horner would stand in awe of them.

And while most consumers and other customers probably couldn't care less whether this even happens or not, I will be watching with big, dripping wads of excitement to see if other tech companies follow a similar path. My hope is that Dell starts a trend, because I have ALWAYS believed that 99% of the information technology companies out there should be privately held.

I can hear all three of my blog readers right now - "WHAT?? ARE YOU SOME KIND OF NON-CAPITALIST, SOCIALIST SCUMBAG??? It's every company's RIGHT to offer up publicly traded pieces for the COMMON MAN to invest his or her hard-earned money in and potentially reap the benefits of the company's growth! As a matter of fact, if they DON'T go public, the terrorists win!"

OK, OK...maybe the terrorist thing was overkill, and you're right - every company should have the right to go public. But hear me out here...even a writer over at Inc. magazine thinks going private is not such a bad idea...

Let's forget for a second that I cast a constant, skeptical eye on the idea that the modern-day stock market is based on anything resembling the utopian concept of supply and demand. You can read about what I feel is really skewing the markets here and here...I won't get into that topic for now.

No - I want to consider Dell's move to return to private ownership a good thing because it gives them the flexibility and opportunity to get a little risky with what they do with their business. See, the information technology world is one whose fire is stoked through constant innovation. And "innovation," as I define it, means "...trying some pretty crazy things that take time to prove themselves a success or a failure." Tech companies that don't innovate and instead rest on their laurels are doomed to be left behind as the IT field evolves. Don't believe me? Just ask this company or this company if they would agree with my hypothesis.

Anyway, back to the concept of innovation. The main variable behind innovation for any company is risk - the risk of money and manpower spent pursuing a concept, the risk of the time it takes for that concept to come to fruition, the risk of the industry's acceptance of the innovation and willingness to pay for it, etc. And risk is something most modern companies have trouble balancing while also being slaves to share prices, market valuations, and other financial variables that investors demand stay forever healthy and growing over 90-day stints.

This brings me back to what Dell's potential might be if they are able to become a private company. If they are smart, they will use the "going private" opportunity to trim the fat from past acquisitions that were head-scratchers (remember Force10 or Perot Systems, anybody?), and spend the time wisely refining some pretty awesome things that seem to go unnoticed. Take Project Copper, for example - their project to develop an EXTREMELY low-power blade server solution based on ARM processors. As we all know, electrical power is king in the datacenter, and if a company can save some of it by using different equipment to get the same results, maybe THEIR stock price will go up :-).
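Just to put some back-of-the-envelope numbers on that "power is king" claim - every figure below is my own hypothetical placeholder, NOT a published Dell/Project Copper spec - here's how fast the wattage adds up across a datacenter:

```python
# Back-of-envelope datacenter power math -- all numbers are assumed
# placeholders, NOT published Dell/Project Copper figures.
x86_watts_per_node = 250     # assumed draw of a typical x86 blade
arm_watts_per_node = 50      # assumed draw of a low-power ARM blade
nodes = 1000                 # assumed blade count
pue = 1.8                    # power usage effectiveness (cooling overhead)
cost_per_kwh = 0.10          # assumed utility rate in USD

watts_saved = (x86_watts_per_node - arm_watts_per_node) * nodes * pue
kwh_per_year = watts_saved / 1000 * 24 * 365
dollars_per_year = kwh_per_year * cost_per_kwh
print(f"~{kwh_per_year:,.0f} kWh/year saved (~${dollars_per_year:,.0f}/year)")
# -> ~3,153,600 kWh/year saved (~$315,360/year) at these assumed numbers
```

At those made-up rates, that's real money - exactly the kind of slow-burn bet a private Dell could chase without a quarterly earnings call breathing down its neck.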

Other tech companies should watch what happens if Dell goes private and consider following suit. After all, survival in the Kingdom of IT boils down to one simple statement:

Innovate or die.

I Heard the News Today, Oh Boy - Apple Shuts Down ZFS Open Source Project

11/1/2009

Dang it, darn it, and shucks all around as I pour a 40oz of Red Bull into the closest gutter I can find - Apple has officially announced it is no longer pursuing the possibility of integrating Sun Microsystems' ZFS into its OS X Server operating system.  Talk about a blow to the cause of offering the COMMON MAN an integrated, self-sufficient way of protecting data...I can hardly type these words in light of my disappointment.  Well, maybe I'm not THAT upset...but STILL...

At any rate, for those readers too lazy to click the above link, what is (and soon to be was...just watch what Larry Ellison does with it once Oracle owns it) ZFS?  In essence, it is an Open Source filesystem that is self-healing and allows multiple instances of the same data to be virtualized across multiple locations that keep each other synchronized.  It also has other nerdy goodness, like dynamic block sizing and rapid filesystem creation, that puts most modern filesystems to shame - I won't bore you, however, by going too far into explaining these types of features...just yet...

So what does ZFS mean for the common layperson? Imagine a world where your important unstructured data lives in a local spot on your computer (say, in the D:\My Data directory).  With ZFS, not only would your precious letters home to mom be warm and safe locally, but D:\My Data would be part of a Storage Pool (called a ZPool - very clever) where the individual data blocks that make up your files are also replicated to other physical locations you define to ZFS. To make things even better, the process would be completely transparent to you, because all you would see is D:\My Data\letter.doc while the subsystem kept the replicated blocks in sync and safe across the different locations.  What a GREAT idea - a virtualized filesystem that doesn't saddle you with some form of third-party replication to accomplish the same task...awesome for the Business Continuity paranoid types!
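For the brave, here's a minimal sketch of what standing up that kind of pool looks like with Sun's standard zpool/zfs command-line tools (the disk device names are hypothetical Solaris-style placeholders, it needs root on a ZFS-capable box, and I've wrapped it in Python purely for illustration):

```python
import subprocess

def run(cmd):
    """Echo and execute a ZFS admin command."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Pool two disks as a mirror: every block written lands on both drives,
# and ZFS can self-heal a corrupted copy from the good one.
run(["zpool", "create", "tank", "mirror", "c0t0d0", "c0t1d0"])

# Datasets appear near-instantly -- no pre-sized partitions required.
run(["zfs", "create", "tank/mydata"])

# A taste of the dynamic block sizing mentioned above: cap the record
# size for this one dataset without touching the rest of the pool.
run(["zfs", "set", "recordsize=16K", "tank/mydata"])

# Confirm the pool and its mirrored copies are healthy.
run(["zpool", "status", "tank"])
```

(One honest caveat: a mirror like this keeps the copies on local disks; pushing copies to truly distant sites is where zfs send/receive scripting or third-party tooling still enters the picture.)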

Now that I have you salivating over ZFS's potential, why can't we all just grab a copy of OpenSolaris from "OrSUNcle" (get it? A hybrid of names, because Oracle is buying SUN - ha ha ha), set up a ZFS ZPool virtually with VMWare, and bask in the glow of a data protection world that is all puppy dogs and ice cream until the end of time?  Because, gentle reader, I will bet you dollars to donuts that Sun's ZFS implementation isn't, as we say in the IT world, "Intuitively Delivered." I will also speculate that its integration with Solaris involves working with lots and lots and lots of Old School "fun" tools like VI, a little AWK, and a sprinkling of GREP too...just because it's there, and we all know EVERYBODY in the UNIX world prides themselves on how complicated things can be made.

Apple would have changed this mentality as it related to ZFS.  They have had a great run as the company that brought UNIX to the masses through OS X, and chances are quite good they would have done the same for ZFS...I dream of an Aqua interface showing GORGEOUS (to borrow from Steve Jobs's vernacular) icons that simplify the multiple destinations of a ZPool, and an easy-to-understand, value-based representation of the links between the different sites so ZFS could prioritize its replication.  It would have been AWESOME, I am certain.

C'mon, somebody - get me a usable version of ZFS!!!

Pondering FCoE - A Solution Looking for a Problem?

10/19/2009

I will admit it - as a long-time, flag-waving zealot of VMWare and virtualization in general, I am by proxy also a fan of block-based, enterprise data storage delivered from Storage Area Networks (SANs).  It's fast, flexible, and provides a simple way of expanding the storage presented to end hosts from a central location.

The concept of centrally shared storage is nothing new, with roots reaching back to the 1980s as a way to let mainframes share a common physical repository of disk drives, which, at the time, cost more per MEGAbyte than the currently proposed U.S. healthcare overhaul.  From this world, Fibre Channel was born.  It was fast, flexible, and complicated enough to keep the egghead mainframe guys happy.

As Open Systems (think Microsoft and PCs) grew in business adoption, Fibre Channel shifted into that world as well and gave system administrators a way to provide centralized, scalable shared storage for the applications that lived there (i.e., databases, e-mail, etc.). While the proprietary nature of its media and deployment meant the technology tended to drift to the expensive side, it was well received by those who could afford it.

By the time we bid adieu to the 20th century, a growing legion of system administrators needed the benefits of centralized, shared storage but didn't want to pay the premium price for the infrastructure associated with Fibre Channel.  Thanks to a committee of nerds at the IETF, iSCSI was born, and boy, was it SWEET. Instead of using proprietary protocols riding on proprietary hardware to access storage, you could now do the same thing by executing SCSI commands encapsulated in IP packets and delivered over a network fabric everybody already had installed and loved - Ethernet. Now the Common Man could get shared storage like all of the cool enterprise folks and glean its benefits.
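To make the "everybody already had it" point concrete, here's a minimal sketch of attaching an iSCSI LUN with the stock Linux open-iscsi initiator (the portal address below is a placeholder for your own storage array):

```python
import subprocess

# Placeholder portal address -- substitute your own array's IP.
TARGET_PORTAL = "192.168.1.50:3260"   # 3260 is iSCSI's well-known TCP port

# Ask the array which targets it is offering...
subprocess.run(["iscsiadm", "-m", "discovery", "-t", "sendtargets",
                "-p", TARGET_PORTAL], check=True)

# ...then log in; the LUN shows up as an ordinary local SCSI disk,
# carried over the same Ethernet switches you already own.
subprocess.run(["iscsiadm", "-m", "node", "--login"], check=True)
```

Two commands over plain old Ethernet - no HBAs, no proprietary fabric, no forklift upgrade.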

In fact, the concept worked so well that all of the major enterprise storage players adopted it into their offerings - EMC, Network Appliance, EqualLogic, and others rapidly jumped up and said, "Me too!", exposing themselves to new customers and revenue while bringing those benefits to a much broader set of IT operations.

Today, it's hard NOT to find iSCSI in many corners of most datacenter operations, mainly because it just works and is easily implemented on existing Ethernet-based switches and NICs.  Even better, the introduction of 10Gbps Ethernet is making iSCSI shine brighter than Fibre Channel in terms of speed (FC's peak throughput right now is 8Gbps).  What's not to love?
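In fact, the headline numbers understate the gap once you account for encoding overhead - here's a quick calculation using the line rates and encodings as I understand the specs (worth double-checking against the standards before you quote me):

```python
# Payload rates after encoding overhead (figures as I understand the
# specs -- verify against the standards before repeating them).
fc8_effective  = 8.5 * 8 / 10        # 8GFC: 8.5 GBd with 8b/10b encoding
ge10_effective = 10.3125 * 64 / 66   # 10GbE: 10.3125 GBd with 64b/66b

print(f"8Gbps FC : ~{fc8_effective:.1f} Gbps of actual payload")
print(f"10GbE    : ~{ge10_effective:.1f} Gbps of actual payload")
# -> roughly 6.8 Gbps vs 10.0 Gbps
```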

Now, there is an initiative by some forces at work to introduce a way to run the Fibre Channel protocol over Ethernet (FCoE), and I am left scratching my head as to why I would want to adopt it when iSCSI is already working. Granted, there are some performance efficiencies built into how the FC protocol works, but I struggle to understand adopting it if 10Gbps iSCSI (or even NFS for file-level access) is working just fine.
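For the visually inclined, here's a rough side-by-side of the competing stacks as I understand them (my own summary, not anything out of a spec) - note that because FCoE lacks TCP's retransmission, it demands special lossless, Data Center Bridging-capable Ethernet gear, which chips away at the whole "reuse what you have" argument:

```python
# Rough comparison of the transport stacks (my own summary, not a spec).
stacks = {
    "iSCSI": ["SCSI", "iSCSI", "TCP", "IP", "Ethernet (any switch)"],
    "FCoE":  ["SCSI", "FCP", "FCoE", "Ethernet (lossless/DCB required)"],
    "FC":    ["SCSI", "FCP", "FC transport", "Fibre Channel fabric"],
}
for name, layers in stacks.items():
    print(f"{name:6s}: " + " -> ".join(layers))
```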

Just because you CAN do something doesn't mean you SHOULD do something, right?  What am I missing?

<End Rant>

    About the Blog

    Random rants and diatribes from a guy who finds himself wading knee deep through the mire of Information Technology.





Copyright 2015, Diary of a Mad Technologist. All rights reserved.