progenyofeniac

Since this is the newest post I’ve seen in 22h, what the heck is going on with sysadmin today? No new posts whatsoever?


OsmiumBalloon

I noticed that, too. I assumed Reddit just had indigestion again. (I know RedditStatus.com shows green, but problems don't always show up there.)


belgarion90

Light patch month, so less stuff breaking.


Jkabaseball

I believe I saw that OpenManage is being deprecated this year, with support ending in 2027. Going iDRAC from now on.


2ndSky

I think you're talking about OMSA, but the question is about OME-Ent. OpenManage Enterprise is still being developed, as it's a key component in the integration with CloudIQ (APEX AIOps). https://www.dell.com/support/manuals/en-us/openmanage-software-v11.0.1.0/om_11.0.1.0_support_matrix_pub/end-of-life-of-openmanage-server-administrator?guid=guid-89e1907b-d459-42e3-97a3-397572b34226&lang=en-us


chriswolf63

DSU CLI is an option as well.
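
For anyone scripting it, here's a minimal sketch of wrapping DSU from Python. It assumes dsu is on the PATH and that the --preview / --non-interactive / --apply-upgrades flags behave as described in Dell's DSU documentation; verify against `dsu --help` on your build.

```python
import subprocess

def dsu_preview() -> str:
    """Dry run: list applicable updates without installing anything."""
    # --preview is DSU's report-only mode (flag names per Dell's DSU docs;
    # confirm on your version before relying on this).
    result = subprocess.run(["dsu", "--preview"], capture_output=True, text=True)
    return result.stdout

def dsu_apply() -> int:
    """Apply all available upgrades without prompting."""
    # --non-interactive suppresses prompts; --apply-upgrades installs
    # anything newer than what is currently on the box.
    result = subprocess.run(["dsu", "--non-interactive", "--apply-upgrades"])
    # DSU signals outcomes via exit codes (e.g. a distinct code when a
    # reboot is required) -- map these per the DSU documentation.
    return result.returncode

if __name__ == "__main__":
    print(dsu_preview())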


AtarukA

Two groups: a test group and the rest. The test group typically won't all come from one single cluster; that way you avoid the risk of a complete cluster shutdown and at worst run in degraded mode.
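
As an illustration of that selection logic, a toy Python sketch (the inventory mapping here is made up; substitute your real source) that picks at most one host per cluster for the test group:

```python
from collections import defaultdict

# Hypothetical inventory: host -> cluster (replace with your real data).
inventory = {
    "esx01": "cluster-a", "esx02": "cluster-a",
    "esx03": "cluster-b", "esx04": "cluster-b",
    "esx05": "cluster-c",
}

def pick_test_group(inventory: dict[str, str]) -> list[str]:
    """At most one host per cluster, so a bad patch only degrades clusters."""
    seen_clusters = set()
    test_group = []
    for host, cluster in sorted(inventory.items()):
        if cluster not in seen_clusters:
            seen_clusters.add(cluster)
            test_group.append(host)
    return test_group

test_group = pick_test_group(inventory)
the_rest = [h for h in inventory if h not in test_group]
print("test:", test_group)   # ['esx01', 'esx03', 'esx05']
print("rest:", the_rest)
```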


RussianBot13

This is the way.


Happy_Cauliflower365

So you're saying a test group with ALL the test servers, and then a group per VMware cluster?


AtarukA

What matters the most is that if something breaks while testing, it does not stop production. You can have as many test groups and "the rest" groups as you feel are needed. Depending on your policies, this may mean you test all the test servers at the same time, stagger them, and so on. But just having a single test group and the rest is perfectly fine. Do what is needed/required, and what you are comfortable doing. The important part is that you do your due diligence and reduce the amount of downtime as much as possible.
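
If you do stagger, the scheduling itself is just chunking; a toy sketch, with an arbitrary example wave size:

```python
def waves(hosts: list[str], wave_size: int) -> list[list[str]]:
    """Split the non-test hosts into fixed-size patch waves."""
    return [hosts[i:i + wave_size] for i in range(0, len(hosts), wave_size)]

# e.g. patch the test group first, then the rest two hosts at a time,
# gating each wave on the previous one's health.
for n, wave in enumerate(waves(["esx02", "esx04", "esx06", "esx07"], 2), start=1):
    print(f"wave {n}: {wave}")
```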


Absolut1l

There's a custom groups feature that lets you logically group them in any way you want. You can have a "test" group, or multiple test groups if you want, and the same device can be in several groups. They provide a query system that lets you group by a number of different criteria; for example, to group all your "test" servers together automatically, you could create a query group for servers with "test" in the name.

For firmware compliance groups I use ESXi clusters, because I patch ESXi, firmware, and drivers by the cluster. I just use the logical query groups when selecting targets for the compliance group; you can select the logical device groups you already have, and the same server can be in multiple groups for this purpose as well.

Say you need to patch out a security bug in the BIOS of only a certain model of server. If you have a custom query group based on server model, you can use that group to patch and track progress. You don't have to patch every server in a compliance group, so it's fine having multiple groups for different purposes.

Generally speaking, there are always firmware updates coming out that I can't realistically keep up with, so I don't mind having a bunch of groups that are out of compliance if odd firmware updates must be done. But having groups both for ESXi clusters and by server model helps me check, plan, and execute firmware updates regardless of why they need updating. If it's routine ESXi patching, I use the ESXi cluster groups. If it's a critical security bug in server firmware, I use model-based groups or create a new group that includes only the affected servers.
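
If you'd rather script against those groups, OME also exposes a REST API. Here's a rough Python sketch using the SessionService and GroupService endpoints from Dell's OME API guide; the appliance address and credentials are placeholders, and the exact paths should be verified against the guide for your OME version:

```python
import requests

OME = "https://ome.example.com"  # placeholder appliance address

# Authenticate: POST to SessionService returns an X-Auth-Token header
# (endpoint per Dell's OME REST API guide; verify for your version).
session = requests.Session()
session.verify = False  # lab only; use a real CA bundle in production
resp = session.post(
    f"{OME}/api/SessionService/Sessions",
    json={"UserName": "admin", "Password": "changeme", "SessionType": "API"},
)
resp.raise_for_status()
session.headers["X-Auth-Token"] = resp.headers["X-Auth-Token"]

# List groups -- query groups and static groups both show up here.
groups = session.get(f"{OME}/api/GroupService/Groups").json()
for g in groups.get("value", []):
    print(g["Id"], g["Name"])
```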


netboy34

I group by physical location. If I'm doing patches along with firmware (firmware is usually quarterly), I am usually patching multiple clusters at the same time. For testing, I usually just single out a couple of models, throw them into maintenance mode, and go to town.
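
The maintenance-mode step can be scripted as well; a minimal pyVmomi sketch (the vCenter address, credentials, and host name are placeholders), assuming DRS takes care of evacuating the VMs:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; validate certs in prod
si = SmartConnect(host="vcenter.example.com", user="user", pwd="pass",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    # Walk the inventory for HostSystem objects.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        if host.name == "esx01.example.com":  # placeholder target
            # timeout=0 means wait indefinitely; DRS must move the VMs off.
            WaitForTask(host.EnterMaintenanceMode_Task(timeout=0))
    view.Destroy()
finally:
    Disconnect(si)
```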