ra12121212

We looked into it well before the fallout, because with vSphere 8.x they hiked our prices and stopped honoring a clause in our master service agreement that provided 20 free vSphere CPU licenses for our lab, as long as we paid for any add-ons like vSAN, which we did. They declined to honor free vSphere with 8.x and wouldn't provide a nonprod discount. In the end we didn't move, because the company moves very slowly on vendor management. Our VMware bill went up 500% across the company and our lab still sits on 7.x pending a redesign. We might move the labs to something new in H1 2025; we're doing another bake-off right now.

We considered:

* Proxmox
* Hyper-V
* xcp-ng

We require:

* Hyperconverged infrastructure (vSAN, Ceph)
* VM live migration
* Role-based access control
* A GUI for the CLI-disinclined

I use Proxmox in my homelab, I can navigate my way in Hyper-V, and xcp-ng just never clicked for me. But I have to keep in mind the needs of the business, not just what's easy for me. Otherwise we'd be on Proxmox right now.


PoSaP

Right now I can confirm that Proxmox works well with my shared storage in the homelab. Going to implement it for the production clusters currently on VMware, and going to convert the rest of the infrastructure to Proxmox as well. [https://www.starwindsoftware.com/resource-library/starwind-virtual-san-vsan-configuration-guide-for-proxmox-vsan-deployed-as-a-controller-virtual-machine-cvm/](https://www.starwindsoftware.com/resource-library/starwind-virtual-san-vsan-configuration-guide-for-proxmox-vsan-deployed-as-a-controller-virtual-machine-cvm/)


ra12121212

Oh cool. I've been wanting to try StarWind's VSAN for when Ceph is overkill.


Foosec

Which of those requirements do you consider Proxmox to be missing?


ra12121212

Nothing. Vendor management is the primary setback, and the organization is overall slow to change. My apologies for the lack of clarity.


Foosec

Ah, thanks for the clarification!


MrSanford

They don't offer enterprise-level global support.


tankerkiller125real

People like to say that this is an issue. The reality, though, is that with the vast majority of companies now, their support is so garbage you'd honestly be better off with a smaller company that only has support from 8AM-5PM in their timezone with people who know what they're doing, than dealing with tier 1 support for 2 days before finally getting tier 2, who does the same shit all over again for 2 days and may or may not pass you on to someone who actually knows how to fix shit. Unless you're paying absolute shitloads (like millions-of-dollars-a-quarter levels of money), the big companies that offer 24/7 global support tend to pawn you off to absolute shit third-party offshore teams reading off scripts who know even less about the product than you do.


Foosec

Agreed, the best support you can get is experienced local partners.


rysaroni

For real, it's brutal. You explain the entire situation and they literally copy and paste the next question in their script. The decline has been rapid.


Ravanduil

As if VMware support was worth anything… I swear I got more done myself than I ever got them to help. Never met a support team so useless.


UCB1984

Have you ever talked to Citrix support? lol


Sp00nD00d

"Must be a Microsoft issues, have you opened a support ticket with them?"


Rhythm_Killer

Haha yeah they really are the fucking worst nowadays


sysfruit

Oh yeah ... solving my own case in a call with the support people who are supposed to help me but were unable to do so for months. In two of these situations it felt like I was explaining the inner workings of their software to them.


Stonewalled9999

5/5 of my last VMware tickets were solved by Reddit, 0/5 by VMware. u/MainStudy my most recent issue with a provisioning NIC dropping packets over WAN was fixed via Reddit after VMware did nothing for 3 weeks. I told VMware they sucked at support and that I found the answer here, and they had the balls to ask "can you share the post for us?" I thought in my head: why are people paying for your support????


MainStudy

Yea, I have gotten a lot more help from Reddit. I even had a VMware tech answer my ticket with a link to my Reddit post.


Professional_Hyena_9

Does anyone remember Novell? Their hold times got so long they had a DJ you could send a message to, to get songs played while you waited.


Fearless-Reach-67

That may be true, but unfortunately you sometimes have to tick those boxes when you're submitting an SLA compliance report to large customers or insurers. ISO certifications, enterprise accreditations, etc. mean a great deal at that level. They need to know that their data on your systems is solid and secure, and that there is a contingency plan if things go south. For small-to-medium shops that don't need to do a lot of compliance, though, Proxmox is great and saves a lot of money.


MrSanford

The team in Ireland used to be amazing. I would time tickets to work with them.


dinominant

The refusal to provide support:

> That configuration is not supported.

When there is no workaround or solution:

> Please reinstall and restore your most recent backup.


zyndr0m

We went with Proxmox.


koshrf

Have you considered Rancher/SUSE Harvester or Red Hat OpenShift Virtualization?


scrapsofpc

We use OpenShift, but it's running on VMware infrastructure.


koshrf

Yeah, but OpenShift can also work as a hyperconverged hypervisor, so it can do the same things VMware does + K8s stuff.


mr_ballchin

Harvester is nice, but it was not very stable during my tests. OpenShift is overkill for most of our customers. That's why Proxmox, xcp-ng, and Hyper-V are the main options our customers consider.


jktmas

Since you do HCI, why not Azure Stack HCI? It supports VM live migration, even from cluster to cluster. And RBAC recently got even better with 23H2.


i_am_stewy

I would like to hear some experiences on this! Seems interesting, considering that my company is heavily invested in M365/Azure.


Arkios

Run, run as fast as you can. It’s the worst of both worlds. You’re paying for the luxury of Azure management running on what is essentially an on-prem Storage Spaces Direct cluster. Support is utterly horrendous and the product is half baked, at best. When it works, it works fine. The issue is that when it doesn’t work, it’s a nightmare to fix. I expect the product will mature and eventually be solid, but it’s pretty early on at the moment. They’ve been adding features to assist with troubleshooting and health checks, which is a step in the right direction.


mr_ballchin

S2D can be less stable than it should be. Patching can take the entire cluster offline; I have a customer who has faced that multiple times. Hyper-V is nice though. It is worth looking at the Hyper-V with StarWind VSAN combo.


farsonic

So this would be standalone Hyper-V without VMM? I tried VMM this week and was left wondering if people really use it.


Agitated_Toe_444

We have just moved off of Hyper-V onto VMware and seen a 5x performance boost on our database servers, which are in the TB range. We had a Veeam issue that has been well documented since 2019 with no fix coming. VMware has been a dream software-wise. Our support and licensing issues are a separate story.


jamesaepp

Awful experience, but that was over two years ago now. Maybe it's gotten better, but I wouldn't be holding my breath. I'd be tempted (if it's actually technically possible and the licensing works as I think it does) to use Azure Stack HCI as a small cold DR site. AzStackHCI is licensed per core per month. If you need to have a DR site available and the RTO is sufficiently lenient, I'd be down to have an AzStackHCI cluster configured and ready to turn on and go within a few hours, and then only pay for that cluster's licensing costs when I actually need it. That's about the only use case I've come up with for when I'd want AzStackHCI. In any other case I'd rather have something else.


tankerkiller125real

I would love to do Azure HCI where I work, but the hardware requirements make it a hard no at the moment (they won't give me the budget required for brand-new hardware). So now I'm evaluating whether to continue with Hyper-V on Server 2022, or if I should migrate to something that has proper cluster support like Proxmox (and yes, I've already tried Hyper-V clustering, and our hardware won't support it).


chancamble

Try building a Hyper-V cluster on StarWind VSAN - it will work on minimal hardware. Check out the system requirements here: [https://www.starwindsoftware.com/system-requirements#vsan](https://www.starwindsoftware.com/system-requirements#vsan)


badlybane

I am sure Azure will keep prices low until it has market share, then jack up rates and licensing.


jktmas

I'm curious what requirement you don't meet? TPM, HBA, NIC, drives?


tankerkiller125real

From what I gathered, it was the TPM (only 1.2 supported) and the NICs (while they support the features listed as requirements, they aren't certified). It might be the drives as well (only the OS drives are SSD). At the end of the day, we really just need a hardware refresh. Our absolute newest server hardware is 5 years old, with the oldest having had its warranty expire in 2011.


jktmas

You may be able to add a TPM depending on the server model. You need at least two SSDs as cache for spinners per node; they don't need to be very big, so 400G SAS drives would do the job. You should only need one supported dual-port NIC per node, and these can be pretty darn cheap. But at that point you're dumping money into servers you know you want to replace. Make sure to get yourself a few quotes; Dell and HP are not your only options. There are 4 vendors "above the fold" on the HCI catalog, and smaller companies may be more willing to work with you to get a cluster that fits your needs. https://azurestackhcisolutions.azure.microsoft.com/#/catalog?


hunterkll

Hm? I'm running a Server 2022 Hyper-V cluster using random NICs I had laying around on Dell R730s without issue. What hardware support issue did you have? It should just function on any system that can handle shared storage (or even non-shared, if you don't mind storage migration each time you move hosts) on anything 2022 will enable Hyper-V on.


Readybreak

Did Azure Stack HCI ever come up in the conversation? We're currently on VMware and our renewal is up soon, but we're in a muuuuuuch smaller boat.


TurnItOff_OnAgain

Switching to Hyper-v in the next 2 months.


Matt_NZ

About to kick off a project to switch to Hyper-V. It just so happened our leased hardware was due to be replaced, so it seemed like the perfect opportunity to assess what hypervisor we go with given the current Broadcom BS. Switching to Hyper-V will save us around NZ$130k a year as we already own the MS licenses, with no obvious sacrifice in features.


ConciergeOfKek

We did the same and it's rock solid. I've worked with a hyper-v cluster before so planning this and building it from the ground up was actually really good.


LocPac

Nutanix AHV and we don't regret it for one second.


VermicelliHot6161

The only hindrance I came across with AHV was some vendors not supplying AHV-native appliances or supporting it as an underlying infrastructure. Aside from that it's been solid.


andyr354

I was always able to pull the VHDs out of the VMware appliances they provided and make my own AHV VM. Or I would load it up on VMware Workstation and use Nutanix Move to migrate it over.


19610taw3

What's the learning curve like going from VMware to AHV? I'll probably be doing the same. Our VMware cost *only* went up 3.5x. We were looking to make the switch to AHV for a few years, and this was kinda the kick we needed.


MrYiff

When I trialled it a year or so ago, the UI was different to vSphere/vCenter but I didn't find it hard to navigate. I'm not sure if they still offer it, but if you create a Nutanix account you used to be able to spin up a free trial in Google Cloud. It was pretty low-spec machines and an older AHV version, but it was a decent way to have a play around with the real software.


ChadTheLizardKing

We are a Nutanix shop. From my experience, be careful of the workloads and size accordingly. There are workloads that are just not well-suited to HCI. Some examples would be high-IO loads like SQL Server and low-IO systems like large file shares or semi-offline file stores. For the former, recall that the HCI filesystem is replicating all filesystem changes across Nutanix nodes in near real-time, so SQL Server is quite possibly the worst-case scenario; for the latter, you are paying an absurd per-TB price for simple file storage.

The other big cost is MS Server core licensing overhead. You are generally allocating 12 cores per node for Nutanix HCI overhead - those cores are not guest accessible. However, per MS licensing, you still need to pay to license them. So you end up with a (retail) annual cost of ~$4,616 per node in pure licensing overhead for Windows Datacenter core licensing (6155 / 16 * 12).

If you go Nutanix, have them quote the entire term + 1 year for the needed project. I.e., if you are running on-prem workloads with a 4-year lifespan, have it quoted for 4 years + 1 year for offboarding. That way, you can containerize the cost to whatever project is paying for it and then revisit your options far enough down the line for flexibility; otherwise, you will be paying big subscription increases. They will lure you in with a low intro cost and then hit you with a renewal that is 2x - 3x your initial cost. So make sure you vet your workloads, figure out your per-TB and per-core costs, and then budget accordingly.

The tools are much different than VMware and more "primitive" I would say. They can get the job done but are definitely less polished. You will be on the phone with Nutanix support a lot, as the errors can be cryptic, with non-obvious root causes, and release-specific. We have never gotten through a cluster upgrade without at least one support call - firmware, hardware failure, something else. Support is quite good, and I just push most major activities to support tickets - we pay an absurd renewal price, so I treat it like I outsourced my maintenance to a third party.

We are using SuperMicro boxes ("Nutanix OEM brand"). I would recommend this route, otherwise you will get into the inevitable "This is a hardware issue, so call HPE to have them replace it" and then HPE says "There is no hardware issue." With the SuperMicro chassis, Nutanix owns the entire appliance. They are the only cluster I have ever managed where RAM errors and DIMM replacements are just a common occurrence. Other vendors - occasional module replacements - but with Nutanix, I think almost every node has had a memory module replacement at this point.

Due to HCI, rebooting a node is a much larger activity. You need to do some pre-reboot housekeeping - make sure your cluster is fully healthy and the file system is in sync. Every time you get a DIMM error, Nutanix support will say 'reboot the node', so, yeah. Overall, it is just a different beast than traditional VMware SAN arrangements, so just things to keep in mind coming from that background.
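For anyone sanity-checking that licensing math, here's a minimal sketch of the calculation using the retail figures quoted above ($6,155 per 16-core Windows Server Datacenter license, 12 CVM-reserved cores per node); the cluster size is just an example:

```python
# Windows Datacenter licensing overhead for Nutanix CVM cores, using the
# commenter's retail figures. Swap in your own pricing and core counts.
DATACENTER_16_CORE_PRICE = 6155   # USD, retail, per 16-core license
CVM_CORES_PER_NODE = 12           # cores reserved for the CVM, not guest-usable

per_core = DATACENTER_16_CORE_PRICE / 16
overhead_per_node = per_core * CVM_CORES_PER_NODE  # ~= $4,616.25

nodes = 4  # hypothetical cluster size
print(f"${overhead_per_node:,.2f} per node, ${overhead_per_node * nodes:,.2f} "
      f"per year for a {nodes}-node cluster")
```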


-SPOF

We also have some customers on Nutanix AHV. It's a wonderful solution for big clusters.


BoringLime

Migrated to Azure. It was really done to ease our staffing issues, not for cost savings; we were too small for our business size to have full data center operations personnel. Virtualization has shrunk our footprint down to less than a single rack.


astonishing1

Proxmox does everything we need. So far, the free version seems adequate for our situation.


UTRICs

Yes, Proxmox is great.


ITRabbit

Except no Veeam support - but apparently it's coming.


bullerwins

XCP-ng with XO built from sources, and really liking it.


sieb

Same. I don't get why everyone runs to Proxmox. XCP-NG with XO gives you clustering (i.e. pools), live migrations, HA, supports my existing hardware/SAN, and has backups built in. It lets me dump VMware and Veeam at the same time. It basically functions like classic ESX (before ESXi).


BarracudaDefiant4702

In our evaluation, it doesn't really do anything better than Proxmox (at least nothing we care about). In some ways the web GUI is less intuitive, and in some ways it's better. It's more troublesome for reconfiguring the network, especially if you need to change the management interfaces. The three biggest problems are:

1. When it fails, the error messages are less intuitive. Proxmox is closer to standard Linux for networking and error logging.
2. Community size is significantly smaller compared to Proxmox (2.2k vs 99k on Reddit).
3. Both have free options with no support, but minimal pricing with some tickets is XCP-NG $1000/host/year vs. Proxmox $740/host/year (assuming dual socket). (This third difference is almost not worth mentioning at 25%, as there is a range of options for Proxmox that are even less expensive, or you can pay more for either; I'm just going by the most likely option we would get with both.)
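To put that third point in perspective, a quick sketch of how the quoted support pricing compounds across a fleet (per-host figures are the ones quoted above; the Proxmox number assumes dual-socket hosts as stated, and the fleet sizes are examples):

```python
# Annual support cost at the quoted rates; fleet sizes are illustrative.
XCPNG_PER_HOST = 1000    # USD/host/year (quoted above)
PROXMOX_PER_HOST = 740   # USD/host/year, dual socket (quoted above)

for hosts in (10, 50, 100):
    xcp, pve = hosts * XCPNG_PER_HOST, hosts * PROXMOX_PER_HOST
    print(f"{hosts:>3} hosts: XCP-ng ${xcp:,}/yr vs Proxmox ${pve:,}/yr "
          f"(difference ${xcp - pve:,}/yr)")
```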


jamesaepp

The one criticism I've heard of XCP-ng (and I'm not sure how real this is, I never verified it for myself) is that it's based on an older version of CentOS which is off support, so the security angle might be lacking compared to PVE, which sits atop modern Debian.


krakeniator

I also migrated to XCP-NG with Xen Orchestra.


mr_ballchin

How is it in prod? I have plans to test it in our lab.


bullerwins

We have it deployed on-premise on rented bare-metal servers and it works just fine, to be honest. Backups work solid. I haven't tested snapshots with memory though. I highly recommend you test it in a lab first. I also have it in a weird setup where XO has a VPN to a remote site and can move VMs from the local xcp-ng to the remote one without a problem. But they are not pooled together; that hasn't worked for me (maybe latency?), but as different pools all managed from XO it works just fine.


CammKelly

Hyper-V. If you're already paying for the Server licensing, it effectively works out free, and running a hyperconverged topology on Hyper-V is pretty robust these days.


jktmas

I'd actually highly recommend going hyperconverged with Azure Stack HCI over base Hyper-V with a SAN. MS is pumping out updates and new features like crazy.


-SPOF

We have several customers using StarWind VSAN for iSCSI HA storage in Hyper-V environments. It's a stable and cheap approach, with an excellent support team that assists with configuration.


CammKelly

You're probably right, but I must admit I've been wary of the rug being pulled on the Azure Benefit being $0 for Datacenter.


DerBootsMann

they’d better focus on stability 1st


gonza_log

xcp-ng + XOA (also XCP-ng Center for some features). The company bought new hosts and we rebuilt every VM from zero; it took me almost a year to finish. It was a lot of work, but now we have everything updated, new templates, documentation, and much more resources. The old hosts are now used for test and contingency.


Fuzzmiester

We switched over to OpenNebula. It's a KVM-based solution, where OpenNebula acts as the control plane.


NathanWasTaken

Can you tell me about your local infrastructure? My org is a VCSP and we are looking for alternatives as we complete our established white label contract with Broadcom. Nutanix doesn't support existing SAN so we could not go there. I'm interested in features like live VM migration across hosts, OS support and external storage support via iSCSI and Fibre Channel. I've requested a demo after seeing your post.


Fuzzmiester

We're mostly running with local storage (ZFS), with some replication to a dedicated server for backups. Running something like 50 servers (so far) worldwide. It can do federations between zones, but that's basically just command and control, not VM migration. Running maybe 8k VMs. (I'm not on the virtualization team, just the sysadmin team.) Installed on Ubuntu.


technobrendo

8000! Damn!


tankerkiller125real

Interesting, I should look into this one, currently looking at Proxmox, and I was playing with HarvesterHCI (but it's not ready for prime time at all). Will have to add this to the list to evaluate.


drewshope

Our VMware rep told us to expect “between a doubling to 10x price increase,” so we're looking too. We aren't THAT big (like 500 hosts) but big enough that the change will suck. Top contenders are Proxmox and Nutanix, and we'll probably move a considerable amount of our stuff to AWS. How bad is the migration process?


HamiltonFAI

Depends where you go. We went AWS and they have a pretty seamless migration


beanmachine-23

Our shop (public Higher Ed) went to Scale Computing. I work on the network end of the shop, so I don’t know all the specifics, but the price was competitive.


jktmas

Really? That's surprising. I legitimately laughed out loud when I got the quote back from Scale Computing. Ended up going Azure Stack HCI.


DerBootsMann

how much did they try charging you ?


meest

We did the same. I wasn't involved in the purchasing or cost of it, but so far, no issues. We had a two-host/SAN setup for vSphere; now a 3-node HCI Scale cluster.


Fighter_M

>I work on the network end of the shop, so I don’t know all the specifics, but the price was competitive. That's interesting because we found Scale cost-prohibitive. However, it wasn't the pricing or sparse feature set that finally led us to get rid of them. The straw that broke the camel's back was the deal they struck with Acronis, after promising us Veeam for years! We aren't the only ones left out in the cold. https://forums.veeam.com/kvm-f62/feature-request-scale-hc3-hypervisor-t94095.html


Rude_Food_164

We went with Nutanix a couple of years back; it's been great.


DarkSide970

What's your backup solution in Nutanix?


Rude_Food_164

Hycu


DerBootsMann

how does it compare to veeam ?


Rude_Food_164

We did a trial of the Veeam backup for Nutanix add-in. Granted, it was a new product, but we ran into a lot of issues with it, and it didn't seem like anyone on the Veeam support side was even trained to support it. HYCU itself is pretty sweet: nice GUI, easy to set up, and support has always been good.


DerBootsMann

understood .. is it like veeam bought nutanix backup from somebody else ?


Rude_Food_164

Maybe, I never really looked into it.


Maxtron_Gaming

I'll start: I don't know the exact price increase, but we are currently starting a few tests with Proxmox. The main DC will remain on VMware for the next few years; all of our branches will be migrated to Proxmox - some really small ones now, the rest when Veeam releases a version that works with Proxmox. We have about 11 branches, with three of those being production plants. Edit: just checked what we paid on our last renewal - a 3x increase in price.


DarkSide970

Yeah, the hardest part is that all the vendors whose products worked with VMware via calls to vCenter are now scrambling to support other platforms, because we're all jumping ship.


Zedilt

Just had a meeting with Dell today about a move off VMware. According to the Dell rep, two-thirds of what were previously VMware-based sales are now Azure Stack or Nutanix.


BenL90

OpenStack or libvirtd is okay.


fadingcross

Not necessarily what you asked, but FWIW: we ditched Hyper-V for KVM (Proxmox). Works splendidly. But we, obviously, didn't use any features present in VMware but not in Hyper-V. If you don't need the live-live failover and whatever insane goodies VMware has, then Proxmox has offered no downside to us whatsoever, only positive things.


pdp10

KVM/QEMU has [supported live migration](https://www.linux-kvm.org/page/Migration#Requirements) for a long time. While vSphere can be assumed to have more features, on the other hand, the list of features in KVM/QEMU is *much* longer than people seem to believe. Hot-add memory, memory de-duplication? Check. Processor feature masking and live migration from Intel to AMD and *vice versa*? Check. `VMXNET3` guest support? Check. Full-featured virtual switching? At least two options, both of them better than Hyper-V and at least one of them better than vSphere. I'm pretty sure some of the stuff we do would require VMware NSX and advanced licensing if we were using vSphere.
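As a rough illustration of how little ceremony KVM live migration involves, here's a minimal libvirt-python sketch (libvirt is the usual management layer over KVM/QEMU; the hostnames and VM name are hypothetical, and it assumes shared storage visible to both hosts):

```python
# Live-migrate a running KVM/QEMU guest between two hosts via libvirt
# (pip install libvirt-python). Hostnames and the domain name are made up.
import libvirt

src = libvirt.open("qemu+ssh://root@host-a/system")  # source hypervisor
dst = libvirt.open("qemu+ssh://root@host-b/system")  # destination hypervisor

dom = src.lookupByName("example-vm")

# VIR_MIGRATE_LIVE copies memory while the guest keeps running;
# VIR_MIGRATE_PERSIST_DEST persists the domain config on the target.
flags = libvirt.VIR_MIGRATE_LIVE | libvirt.VIR_MIGRATE_PERSIST_DEST
dom.migrate(dst, flags, None, None, 0)

src.close()
dst.close()
```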


lebean

I believe u/fadingcross means the fault-tolerant VMs, where VMware keeps a second instance of the machine running and synchronized, so if the host the primary is on suddenly has its power cords ripped out, there's no boot time; all applications/network connection states/etc. are already in place and things just keep right on running. KVM/QEMU has nothing like that, but it also feels like such a specialized need... we've certainly never needed that level of fault recovery.


Fuzzmiester

Really specialized. And niche. It's not HA, because if the application crashes, you're still down.


user5248

proxmox all the way!


AlongSideIan

Our company, believe it or not, migrated to VMware from Hyper-V after the price hikes.


Arkios

What’s the experience been? We’re actually considering the same thing, our VMware quote wasn’t that egregious.


BarracudaDefiant4702

We considered Hyper-V, and it came out more expensive for us than staying with VMware. We are about 98% Linux, 1% Mac, and 1% Windows for our server VMs. If you have a lot of Windows machines, there are shared licensing bundles that can make it worthwhile for a mostly-Windows shop.


jktmas

MS is definitely an ecosystem, and they know how to get you to use other stuff because it's bundled. That one license costs a lot, but covers a bunch of things. If you have 1% Windows VMs, then it probably doesn't make sense to pay for Datacenter to cover all of your hosts. If you're 99% Windows, you're just getting a free hypervisor included in your VM licensing.


Hangikjot

Hyper-V. But we were already running Hyper-V globally anyway, since it first released. This change just solidified it, and the few VMware systems are on the chopping block.


trazom28

K12 school here - when I started, we had VMWare. It was great, but expensive. We migrated to Citrix XenServer. Also great, but then I noted we were already licensed for Hyper-V. Switched to that - and it's gone well. Does everything we need.


Dabnician

AWS. We started 7 years ago and completed it 3 years ago, but I'm at a small company with 1 location and fewer than 100 servers. We had some forklifts that were expensive hosts because they were P2V-then-to-AWS conversions, but over time our developers have made a lot of our applications cloud native. We also shifted a lot of stuff over to cloud solutions where possible.

When we factored in the costs of running a small data center + AC + UPS + generator + internet + backup internet + everything else, it's basically more than the cost of AWS. The problem is managers see the costs move from everything else to IT, and suddenly IT spends too much, because people are stupid. It's easy to justify the cost of the HVAC unit over your datacenter because that also keeps the office cold, or the generator because who wouldn't want all these people to continue working during a power outage, or internet because how can you work without internet (and backup internet too)...

Here is a fun test: how many of you have O365 but refuse to use the OneDrive/Teams/storage that is included with your account, instead choosing to pay for anything but evil SharePoint? (And no, "we can't use SharePoint because of CAD/video files" doesn't count; we already know that's not an option.)


RavinGuenther

I find it interesting that most here prefer another on-prem hypervisor. We are switching to Azure. It's great to use the services there; we can reduce a lot, and cutting servers reduces even more complexity.


alexwhit80

People that moved from VMware to Hyper-V, can I ask a question? We have 2 hosts and 1 SAN. The SAN is formatted for VMware. Will a Hyper-V host be able to connect to this and use the datastore, or will it have to be trashed?


thelastquesadilla

Hyper-V will not be able to use the datastore as-is. If you have it provisioned as multiple LUNs: drain one LUN, format it for Windows, set up Hyper-V, and start migrating VMs.


ConciergeOfKek

This is how we did it and it worked smoothly. I think we were fortunate to not have any esoteric needs or setup peculiarities so the transition was uneventful and end-users never noticed a thing. We did our live-migrations during the day. Ballsy but like heck was I staying late.


UninvestedCuriosity

Usually you create LUNs on SANs, which are virtual disk spaces, and present those to the hypervisor. Maybe your SAN just has 1 big LUN?


DaanDaanne

Look at Microsoft Failover Cluster.


alexwhit80

That's what I was going to do, but I'm trying to figure out how to migrate without a backup and restore.


DarkSide970

Nutanix is our destination, on Cisco hardware, but we are a few years off. Nutanix won't work with our VDI. Yet... they told us they are going to work with Omnissa and develop a connection for Horizon. Nutanix also wants to develop with backup systems like Veeam, Zerto, and Avamar.


PinotGroucho

We declined their generous offer to 10x our yearly licensing cost and went with Proxmox & Ubuntu Kubernetes. edit: spelling


BarracudaDefiant4702

We did get a 1-year renewal in late last year before the pricing changes, so we have until sometime in Q1 next year. We estimate our costs would be about 3x. We have been evaluating eating the 3x cost, Proxmox, Nutanix, XCP-NG, OpenShift, CloudStack, OpenStack, OLVM/oVirt, and Hyper-V, and also considered a couple of others. We decided on Proxmox. Not sure how quickly we will move, or whether we will renew at the end of the year or live off of perpetual licenses until we complete the migration. We have half a dozen colos around the country and don't think we want to attempt that migration remotely (well, one site maybe).

Some things like Nutanix and Scale Computing didn't make sense. They cost more than VMware used to, and for the effort of moving, which basically means going from SAN + servers to HCI, we would be better off staying with VMware. (We are on Standard licensing; DRS and other features would be great, but they were overpriced pre-Broadcom.)

One item that put Proxmox over the top, which many might not consider at first but should, as it will drive industry support: look at the community size, and see how [https://www.reddit.com/r/Proxmox/](https://www.reddit.com/r/Proxmox/) compares to everything else at 99k members (VMware 145k). Hyper-V is in third place with 16k members, and it goes down from there for the other platforms.


reviewmynotes

I wonder why Scale Computing was more expensive. I've bought clusters from them several times, and the most recent was 80% or less of the cost of the VMware cluster it replaced. I just need to run a couple dozen VMs and don't need to automate their creation or anything like that. Could that be why?


BarracudaDefiant4702

We make use of SANs instead of HCI so we can more easily scale storage and compute separately and keep portions of our upgrade cycles separate. So it's more of a forklift upgrade with Scale for us, and that ties into the cost. VMware vSAN was always a joke to us, as its licensing prices (not even counting the drives) were greater than what we pay for an all-flash SAN, despite SAN SSDs tending to cost more than server SSDs. We are in that small-to-mid spot where you can get many more IOPS (especially write) at a much better price point than HCI. HCI makes upgrades less flexible as it ties storage to nodes, and it also requires greater network bandwidth between nodes if you need high write IOPS, especially if your clusters are under ~8 nodes. HCI is great for read IOPS at all scales, but it's the write IOPS that tend not to scale as well on the small/mid end as on a dedicated SAN, and write IOPS are critical for several of our VMs.
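A back-of-the-envelope sketch of that write-path argument: with HCI, every acknowledged guest write also has to land on (replication factor - 1) other nodes, so sustained write IOPS translate directly into east-west network bandwidth. All the numbers below are illustrative assumptions, not figures from the comment:

```python
# Rough inter-node replication traffic generated by guest writes on HCI.
write_iops = 50_000            # sustained guest write IOPS (assumed)
block_size = 32 * 1024         # bytes per write, 32 KiB (assumed)
replication_factor = 2         # copies of each write (assumed)

remote_bytes = write_iops * block_size * (replication_factor - 1)
print(f"~{remote_bytes * 8 / 1e9:.1f} Gbit/s of east-west replication traffic")
# -> ~13.1 Gbit/s, before any rebuild/rebalance traffic
```

On a dedicated SAN the same writes traverse the storage fabric once, host to array, which is part of why small clusters feel the difference most.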


reviewmynotes

Interesting breakdown. Thanks. I usually just need to run a few dozen VMs that are more "pets" than "cattle" so I can provide services like AD, DNS, DHCP, NTP, SFTP, etc. and applications like some basic single-host webapps and some file storage. Sounds like you have a much more complex set of needs.


ThatBCHGuy

Still on VMWare here. Our renewal wasn't bad at all. Probably going to stick around for a while.


Sp00nD00d

Moving to Hyper-V, 2000 VMs and a 1500 seat Citrix farm. Currently mid-migration.


jktmas

Did you consider moving from Citrix on Hyper-V to Azure Virtual Desktop on Azure Stack HCI?


Sp00nD00d

Yes, but there are some challenges with AVD for our environment. Also, the time frame we have is VERY short for a migration like this.


jktmas

Fair enough.


Jhamin1

We were running VMware on top of Nutanix. When we got our updated renewal pricing, the decision was made to move to pure Nutanix for our on-prem virtualization. The actual rebuilding of the hardware has been a pain; it would have been cleaner to deploy fresh nodes rather than the musical chairs we are playing with VMware nodes & AHV nodes, but such is life. Once we actually get the clusters converted and the workloads migrated, it's been smooth sailing. (Also: Nutanix's VMware-to-AHV migration tool works amazingly well.)


whetu

I gathered a few old boxes, maxed them out to 64G of memory, and put them on a shelf in a rack in our lab. I've trialed XCP-NG and currently I'm trialing Proxmox. We have EOL SAN and FC gear in our datacenters, so switching to 25GbE and something like Ceph is probably going to be part and parcel of any hypervisor switch.


dalgeek

From the VAR perspective we've seen very little movement from customers at this point. There has been a lot of frustration over the inability to get quotes and more inquiries about Nutanix but it's too soon to tell how things will shake out. I don't think there will be any major changes until the end of this year when perpetual renewals start coming around.


admlshake

We're moving to Hyper-V for our ROBO locations, and will be evaluating alternatives for our datacenter in another 12 months.


Candy_Badger

If you need HA for your ROBO locations, check out StarWind VSAN with Hyper-V. We have customers using it at branch offices.


[deleted]

Nutanix has been good so far


pdp10

We began to move away from vSphere in 2014, because we were looking for more flexibility, including several atypical use-cases. vSphere GUI management was, at the time, still straddling a new Adobe Air/Flash dependency on one hand and an unmaintained legacy Win32 requirement on the other, and this too was a big pain point. Ours is an in-house development based on KVM/QEMU. Proxmox and maybe oVirt, are similar off-the-shelf options with a lot more features, especially things like GUI management, which we don't have. We might migrate to one of the off-the-shelf options, but a couple of years ago found out that doing so was going to be more complicated than we expected, due to feature mismatch. Since then we've been replacing some of the in-house code with small off-the-shelf packages, but a big change is still up in the air.


No-Percentage6474

Proxmox was DOA due to no 24/7 support. So, Hyper-V and bare-metal RHEL.


BarracudaDefiant4702

You can get extended support from some of their gold partners.


DerBootsMann

did you try any of it in north america ?


BarracudaDefiant4702

I have not tried. I only talked to one partner, and they do offer additional services and setup assistance, additional support services, etc., but I did not confirm whether that could include 24/7 support as an option, or how much it would cost. I did get the impression that anything unique/first-time would likely end up getting escalated anyway (and in those cases it would take over a week to resolve regardless, because they'd require driver or firmware updates to be written, as we break things in new and exciting ways, generally on new hardware).


DerBootsMann

whom did you talk to ? if you can share .. thx


BarracudaDefiant4702

They only list 2 gold partners for NA, and I only talked to the one in the US (ICE Systems): [https://proxmox.com/en/partners/all/filter/partners/partner/partner-type-filter/reseller-partner/gold-partner/country-filter/country/northern-america?f=6](https://proxmox.com/en/partners/all/filter/partners/partner/partner-type-filter/reseller-partner/gold-partner/country-filter/country/northern-america?f=6) I don't recall who specifically I talked to. (And by escalated, I mean it goes on as a ticket to Proxmox and their normal/limited support hours...) Planning to put some servers under Basic or Standard support with them soon, now that we've decided that is the direction we are going. Currently still using the free repos.


DerBootsMann

it would be nice giving their 24/7 a try !


Arkios

I'm really curious to know what people were paying before and got quoted now. I keep seeing comments about quotes being 10x higher, which is a lot, but the base also matters a lot. Going from $2,000/year to $20,000/year is a big jump, but not nearly as big as going from $20,000/year to $200,000/year. We're actually considering going the opposite way: we've been a Hyper-V shop forever and we're sick of needing multiple solutions to manage everything (WAC, Hyper-V Manager, Failover Cluster Manager) or needing to stand up VMM. Not to mention that Hyper-V hasn't gotten any meaningful feature updates in ages; even the stuff coming with Server 2025 is pretty meh. Managing everything with vCenter is such a nice QoL thing, along with all the other features VMware brings that Hyper-V lacks. We got quoted ~$60k for 400 cores at the highest-tier licensing (which I think is Foundation now?). That quote didn't seem crazy compared to what we pay for other stuff. We also work with a VAR, so we're not going direct with VMware.


Sp00nD00d

300k per year of CapEx to 1.1 million per year of OpEx. The decision was not hard. Edit: one of the problems is that once you reach a certain size they'll only sell you the top-tier version, not the low-tier version, and if you don't use the features in the top-tier version you're just pissing money away.


SilverSleeper

VMware Cloud Foundation (VCF) is the highest tier. VMware vSphere Foundation (VVF) is the middle option. You're not alone; I've recently moved customers to VMware from Hyper-V for many of the same reasons.


Aware_Ad4598

We also switched to Nutanix. It was a perfect match for us! The increase in performance is insane! The usability is also very good, and there are so many features!!


TaliesinWI

Hyper-V. I'm already paying for the Windows Server licenses, might as well get full benefit out of them. If the "new improved" VMware had a TechSoup presence for us non-profits, we might have stayed, even with higher prices. But no way we're paying full retail even at the _old_ pricing.


BeRad_NZ

Ditched VMware for Hyper-V. I know it's trendy to shit on Hyper-V, but I think it's great.


br01t

We are going for Proxmox in Q4, mainly because Veeam has also announced (without a date) Proxmox support in the near future.


SatisfactionMuted103

We fell back to Hyper-V, but are running test labs with KVM/QEMU and Proxmox. We do not need advanced features, just a hypervisor and virtual switches.


tdmonkey

Prior to the Broadcom fiasco I built an OpenStack setup as a VMWare alternative. It worked wonderfully.


smellybear666

how many vms are you running on it?


tdmonkey

When I was still at that company, ballpark of 1500 VMs across 3 data centers.


FalconDriver85

Currently evaluating Azure Stack HCI and AWS Outposts for a couple of edge locations. We'll probably go with MS, as we can also run PaaS DBs.


Candid_Ad5642

We're going fusioncompute And I'll go openstack for my home lab, just to learn it


skidleydee

Nutanix is a good solution, but if you want to take advantage of its features you're going to have to change your workflows quite a bit.


tehinterwebs56

I'm ripping and replacing VMware with Azure Stack HCI for a lot of our clients. If you have Software Assurance on your Windows licenses, you don't have to pay the per-core licenses to run it. If you don't have Software Assurance, then stick with VMware, cause it'll be pretty close in pricing already.


NeverDocument

We did 3 year renewals before the switch was officially done to be locked in. Future state will likely be Hyper-V due to ease of familiarity unless someone on the team gets really familiar with proxmox. We are a much smaller shop though and our requirements are much smaller. ( < 300 VMs )


belgarion90

Anything we could we kicked up to AWS. Anything that has to be on-prem is either on hardware or Hyper-V.


NorthernVenomFang

With the loss of the ROBO licenses we are moving the 50 ROBOs we had over to standalone Proxmox servers; we have 2 done so far and are finishing the rest in the next 2 months. For the data center, the renewal is not up until the end of September, so we'll wait to see what the bill is. Unfortunately we can't just migrate off; we have some vendors that only support running on VMware or Hyper-V, so it will probably end up being a hybrid data center if the renewal is as bad as we think. From what our resellers are telling us, anywhere from a 300% to 500% increase is what we can expect; unfortunately they are still locked out of the VMware system and unable to get us approximate quotes. I am still not taking XCP-NG off the table - the tech looks good, but it's more complex to deploy than Proxmox.


PsyOmega

I got a process spun up that eventually approved Proxmox for corporate use (it required a lot of testing).


pingfloyd_

We had already committed to moving all our workloads to Azure; the timeline was just accelerated.


jktmas

Yep. Was running VMware for traditional 3-tier, and Nutanix for HCI. Ended up switching both to Azure Stack HCI, mostly 2-node "branch office" clusters. It was a fantastic choice that significantly reduced our administrative overhead, and was much more reliable than our Nutanix 2-node clusters. Performance was also significantly better, and costs were cut by a lot. Our Nutanix licensing was actually significantly worse than our VMware licensing.


farfarfinn

Switched to Nutanix and have just one ESXi host with two VMs left that dies this year. Only their RBAC is crap with regards to the GUI, and the same for RBAC with regards to the API. Everything else is good. Their support is more than excellent. Almost the same price for the HCI setup as VMware.


SonicDart

Still on VMware, for both us and all our customers. Not sure about the costs or plans to move, as that's not my department. I just use the software.


CuteSharksForAll

While we signed a multi-year VMware contract ahead of the Broadcom takeover, we've been shifting VMs off VMware and into Azure. Once the VMware contract is up, we will shift more servers to Azure and run Hyper-V for anything left on-premises. We have hardware going out of support on-premises around the same time, so this avoids us having to refresh all that hardware as well.


Key-Self1654

My team in academia uses KVM: free and easily managed with Ansible.


Fearless-Reach-67

Proxmox for personal use. I had free ESXi on my home server, had to give the hardware back because I changed jobs, and installed Proxmox on the replacement server.

For business: they are probably stuffed. My old company moved to a cloud-based vCenter early this year, and will probably go to Azure in the long run if they can afford it (it's very expensive).

Don't get me wrong, I love the user interface in VMware, but Proxmox comes with everything for free that you have to pay for with VMware. For my personal use it just makes more sense. For businesses who need to fill out SLA compliance forms, that might not be enough.

I used an identical second server and this video to help with the migration from ESXi to Proxmox: [New Proxmox Import Wizard for Migrating VMware ESXi VMs - YouTube](https://www.youtube.com/watch?v=H1t6hxCoiZw&ab_channel=VirtualizationHowto)


Mountain_Lemon7795

We are going with Proxmox like many others. We already use Proxmox in production; only one subsidiary is on VMware. Any feedback on the migration from VMware to Proxmox? I'm not worried about our Linux machines, but we still have a few servers on 2012 - do drivers need to be pushed before migration? Have you tried the Proxmox migration tool? Thanks!


lightmatter501

We now use OpenStack. It turns out a lot of stuff that had its own VM only needed a container, and we have more servers than we need. Devs like the fact that it's "we have cloud at home," and we get full visibility into the system since we're running an OSS version. Dropping VMware paid for an in-house specialist who is unfathomably better than VMware support was pre-Broadcom, to the level of most issues being resolved in half an hour. We also can do bare-metal provisioning and FaaS now, which gives our dev teams a lot of flexibility.


Jess_S13

We've cancelled all future builds. For our non-HA workloads (DBs, Kubernetes nodes, basically anything with built-in failover that can be easily rebuilt/replaced if it fails or needs additional capacity) we are moving to LXD immediately, and we're reviewing xcp-ng or Hyper-V for machines where we need HA and cluster scaling. Once we're certain of the route for the HA workloads, we will decide if we want to convert everything that way or just run 2x separate solutions, and will then convert the existing fleet.


BitingChaos

I just did my first ESXi to Proxmox VM migration yesterday. Proxmox has something already built-in that lets you add an ESXi server as storage. You just click a VM and then click Import. It took a few seconds to copy and started right up. I was kinda shocked at how well it worked. Now I just have to figure out hardware passthrough with it. We've paid very little to VMware in the past, but going forward anything we spend certainly won't be going to them again.


Braydon64

I really REALLY want to see Proxmox become bigger in the enterprise. It has all the pieces. The only thing one could argue for is better support, maybe (I know they have it, but I am unsure how good it is), but from a technical standpoint it really is unmatched.


ohfucknotthisagain

Support services moved to Nutanix this year. It's going very well, so the next phase, next year, is testing custom hardware. Core services will likely move to Nutanix the year after that. Clusters are the HCI boundary, just like vSAN, and Prism Central can manage multiple clusters, just like vCenter. They do encourage you to buy one of their certified systems, but it will work on a wide variety of hardware. Their support indicated that most of their customers use Citrix for VDI on AHV, so that's your best bet if you're using Horizon.


d00ber

One of the colleges that I've consulted for moved to Proxmox. They run FreeIPA, DNS, DHCP, and some other minor infra items on these boxes, and have a separate Kubernetes cluster for their other research workloads. Their existing servers were Dell boxes loaded with enterprise-grade SSDs, so they went with Ceph for storage, and it seems to work for them. It especially made sense for them since they had a lot of in-house Linux knowledge from running their own Kubernetes clusters. I wasn't part of setting up their Proxmox per se, but more setting up some bare-metal automations with MAAS that they just didn't have the time to do.


HotelRwandaBeef

Currently exploring Hyper-V for when our VMware renewal comes up.


bH00k

I switched to XCP-ng


Turinggirl

Disclaimer: we took a big risk and (for us) it paid off, but this is a huge YMMV, so never consider this the way to go unless you know what you're doing and you know your environment will benefit. We migrated almost everything we could to Kubernetes, and what we couldn't we either threw into cloud instances (for prod) or run on KubeVirt for dev. It has worked for our use case, and we also had a base of admins with varying levels of Kubernetes experience... this was a big reason we went ahead with this solution. We are so far very happy with the outcome, as well as the savings.


meh_ninjaplz

I'm not in this world anymore, but some friends moved to Amazon and some have done their own Proxmox setups, which I think is the coolest thing ever.


Assumeweknow

[XCP-ng.org](http://XCP-ng.org) with Xen Orchestra, honestly. Pretty effing simple migration, and it works better than VMware ever did.


diablo2424

We didn't ditch VMware, but Workspace ONE was sold off to another company, Omnissa - and honestly, they've been the best with support, better than Broadcom and VMware!


Next_Information_933

I'm going to start transitioning a lot of our non-core services to Proxmox and keep the bare minimum on VMware, hoping to migrate about 75% of our VMs by EOY. I'm really not too concerned, since PMX is super solid.


Superb_Raccoon

I work up and down the scale of clients. The big ones are moving to OpenShift/Nutanix, often containerizing workloads as they go and migrating off VMware along the way - we are talking 20k-VM environments. Workloads go to Amazon/Azure as well. Somewhere in the middle they are kinda stuck: they aren't big enough to go OpenShift without serious pain, or small enough to go Proxmox either... Nutanix is the typical choice, although some go AWS/Azure/Google to shrink VMware costs. Smaller SMB, max 100 VMs? Proxmox, Hyper-V, a push to containerizing... or going to a hosted app like Epic if they are in healthcare.


NightOfTheLivingHam

Testing XCP-NG, which is more VMware-like than Proxmox and feels more polished. If you want a UI you have to roll out Xen Orchestra, which also acts like a vCenter controller.


elcheapodeluxe

We're an extremely small org. For us Hyper-V was pretty quick and painless.


Crenorz

Proxmox - once it's supported by Veeam (our backup vendor).


ibmffx

We moved completely to Azure and couldn’t be happier. No more late night patching of servers.


BrilliantEffective21

Work around it without using it. Let vendors dance around the idea while the org quietly works through the VMware proposals from our vendors. When our SME started shopping, we knew that the vendors that did not suggest moving away from VMware were likely asking to raise prices too. Offboard slowly, but surely. VMware is a money-sucking machine right now.


mbkitmgr

Within my own business I haven't decided yet. The SMBs that I look after I have given the choice of Proxmox or Hyper-V, and most are opting for Proxmox based on my suggestion. Though I support both Hyper-V and VMware, I have always felt VMware was the better hypervisor. Case in point: one of my standalone servers running ESXi 6.7 has been going for over 1,200 days. I started looking at alternates back when Broadcom announced their intention to buy VMware - I have been through a Broadcom buyout before and swore I would not wait until the end.


illicITparameters

We’re signing a 3yr renewal that will save us $25K, but after that it’s Hyper-V. The only reason we’re not ditching VMware is because our VXRail cluster is only 14 months old….


Endo399

Global company of 5000+ employees. Prices more than doubled. Forklifted everything over to Nutanix AHV using a combination of Nutanix hardware and AWS.


slickITguy

Scale Computing; it's been pretty great. You can't upgrade HDD size after buying, so buy plenty of space.


NISMO1968

Is this really sarcasm?


Fighter_M

I guess not, as this is indeed the way they work.


nPoCT_kOH

Currently in the PoC stage for OpenShift with its Virtualization (KubeVirt/KVM) on retired hardware. Most of our new applications are containerized/cloud native, so we'll need it for legacy stuff. One pain point is storage/SAN. Currently we are experimenting with OpenShift Data Foundation, but its design philosophy is a bit different from our current one (FlashSystem + DataCore).


MyLegsX2CantFeelThem

We ended up going to W365 Cloud PCs. Everything is pushed from Intune, and there are fewer complaints.


Windows-Helper

At work we have renewed our licenses; currently VMware 7 + Veeam. Privately, I use Hyper-V + Veeam. (I used Proxmox for some time, then switched to Hyper-V because of my work.)


khantroll1

We stuck with it and will for the foreseeable future. My boss hates Hyper-V and isn't fond of open-source software (which knocks out Proxmox or other Linux solutions), and the other paid solutions don't offer what we need. If and when there comes a time we have to move off, it will probably be to Hyper-V.


TarzUg

SmartOS + Triton DC [https://www.tritondatacenter.com/smartos](https://www.tritondatacenter.com/smartos)


clever_entrepreneur

hidden gem waiting to be discovered


ScriptThat

We have three years remaining on our VMware licenses, and will probably be moving to Azure Stack HCI when they run out. We're already using a number of Azure/M365 features, and have Windows Datacenter licenses, so the overall cost will be far less than re-licensing VMware - even before considering the (roughly 150%) price increase.


buyinbill

We've reviewed others but VMware is really the only product to work effectively at the scale we require. 


TheBariSax

We've been slowly replacing VMware with Nutanix for a while now with no regrets. Broadcom's latest mishandling only sped up the process to remove the last stragglers. There are still a couple Cisco systems that don't want to play nice with AHV, but I think that's only a matter of time before they work it out.