Re: [aperi-dev] Full support for Solaris?
On Aug 28, 2007, at 6:35 PM, Todd Singleton wrote:
Dale! Great to hear from you - a SAN admin. Solaris support?
Let's make it work... I'll ping a few people that can better answer
your question. Stay tuned.
Excellent.
Also, please feel free to let us know your thoughts on the product
- good, bad, and ugly. That way, it can evolve to better serve you.
Well, I must say that I actually haven't /used/ Aperi yet. I only got
as far as unzipping a recent snapshot on a Solaris 10 x86 server of
mine, rummaging through the code, configuring it, and seeing how far
I could get, which is when I realized that there was actually some C
code involved, thus prompting my post here. I will tell you, as an
administrator of a medium-ish sized SAN[1], what I have been looking
for and how I see Aperi addressing it, based on what I've seen so
far. I do realize that Aperi is a newcomer to a very sparsely
populated block, and baby steps are the order of the day.
2) can you name any other areas outside of Solaris support that
need attention in order to meet your needs?
One thing that comes to mind is performance management. I have a
bunch of RAID arrays (Sun StorageTek 6140s, which, just like your own
IBM DS4700, are of Engenio progeny) that allow volumes to be nailed
to one of two controllers and, thanks to the wonders of multipathing,
let volumes be shuffled between them as needed. Obviously, some
volumes are utilized more heavily than others, and having one
controller drastically busier on average than the other is a real
concern, especially when it comes to maintaining high cache hit
rates. A tool that helped one visualize load on a per-volume basis
would be great. It would allow the admin to take a look at an array
and move a busy volume to a less loaded controller, with the goal of
getting things balanced (and making the quiescent controller earn
its keep, damn it.) The upshot of all of this is that in the event
of a controller failure, the admin can have a reasonable expectation
of the impact on the surviving controller after it assumes the dead
controller's volumes along with its own.
This feature could then be parlayed into a SAN- or array-wide volume
utilization/performance visualizer.
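Just to make the balancing idea concrete (this is a toy sketch with
made-up volume names and IOPS figures, not anything Aperi actually
does today): given per-volume load numbers, a greedy pass can assign
volumes to the two controllers so their totals end up close.

```python
def balance(volumes):
    """Split volumes between two controllers by load.

    volumes: dict mapping volume name -> average IOPS (hypothetical).
    Returns two lists of volume names, one per controller.
    """
    ctrl_a, ctrl_b = [], []
    load_a = load_b = 0
    # Place the busiest volumes first; each goes to whichever
    # controller is currently carrying less load.
    for name, iops in sorted(volumes.items(), key=lambda kv: -kv[1]):
        if load_a <= load_b:
            ctrl_a.append(name)
            load_a += iops
        else:
            ctrl_b.append(name)
            load_b += iops
    return ctrl_a, ctrl_b

# Hypothetical per-volume averages a visualizer might surface:
vols = {"oracle01": 4200, "mail": 1800, "home": 900,
        "scratch": 600, "backup": 300}
ctrl_a, ctrl_b = balance(vols)
print(ctrl_a, ctrl_b)
```

A real tool would of course weigh more than raw IOPS (cache
behavior, read/write mix), but even this crude split tells the admin
roughly what the surviving controller would inherit after a failover.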
Other features of interest would be the ability to centrally manage
LUN masking along with zones, and to give the host agent the ability
to grok N-Port ID virtualization once that feature finds its way
into OSes (it's soon to be a feature in Solaris at least, where it
is useful when using Xen or Solaris Zones, and with other
non-Solaris thingies such as VMware).
3) are you interested in becoming a contributor?
We'll see. I'm just getting my feet wet with Aperi. I wouldn't rule
it out, though :)
[1] https://spaces.umbc.edu/display/CIG/Core+Storage+Fabric
/dale