The lonely tech post thread.

The business and the boss are really keen on this

Ugh

Other than it looks like it's meant to turn you into a genius, I don't think I really understood a word of that.
 
Depends on where the cert is. Purely internal things get a purely internal cert. Distributing our internal Certificate Authority is a part of the build process for everything.
Externally, I would ideally use Let's Encrypt but we're a University and we're supposed to use JISC's service. I look after 150 or so certs and keep everything in a spreadsheet, with the actual grunt work around it put into scripts. It's a right PITA, but I'm stuck with our chosen provider. Every time an external resource asks if they can just use LE instead of getting a cert from us, it's "Yes, please!".
Let's Encrypt is a great service but has its limitations, the main one being that its certificates expire after 90 days.
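Not our actual tooling, just a rough sketch of the sort of grunt work I mean: feed it a list of hostnames (the two below are made up) and it reports how many days each served cert has left, so the spreadsheet can be updated before anything falls over.
Code:
# Rough sketch only - not our real scripts. Checks how long the cert served
# by each host has left to run. Hostnames below are made-up examples.
import socket
import ssl
from datetime import datetime, timezone

HOSTS = ["www.example.ac.uk", "vle.example.ac.uk"]  # hypothetical entries

def days_remaining(host: str, port: int = 443) -> int:
    """Fetch the certificate served by host:port and return days until it expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    return (expires.replace(tzinfo=timezone.utc) - datetime.now(timezone.utc)).days

for host in HOSTS:
    print(f"{host}: {days_remaining(host)} days left")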
 
The business and the boss are really keen on this

Ugh


I came across a blog about this. On the face of it, it sounds interesting, but it might just end up being the next Clippy (or something really shit like Viva).
 
Depends on where the cert is. Purely internal things get a purely internal cert. Distributing our internal Certificate Authority is a part of the build process for everything.
Externally, I would ideally use Let's Encrypt but we're a University and we're supposed to use JISC's service. I look after 150 or so certs and keep everything in a spreadsheet, with the actual grunt work around it put into scripts. It's a right PITA, but I'm stuck with our chosen provider. Every time an external resource asks if they can just use LE instead of getting a cert from us, it's "Yes, please!".

This is another thing I need to get my head around. I don't think we really bother with internal certs much and I've not touched the AD element.
 
Two hours late finishing work because I couldn't get fucking Autopilot to play nice. :(

I'm not even sure why we use it for this client. They aren't exactly big and don't get new machines that often, and I normally log in to finish off, which totally defeats the point. Couldn't even get Intune and Company Portal to do their thing until I'd wiped it and removed everything from the various admin panels. And Intune feels really slow when you're already past your going-home time.
 
Some very cheap prices on NVMe drives on Amazon today. Think I might pick up a 1TB Crucial P3 Plus Gen 4 for my homelab server, they are down to £38. :)
 
Some very cheap prices on NVMe drives on Amazon today. Think I might pick up a 1TB Crucial P3 Plus Gen 4 for my homelab server, they are down to £38. :)
Personally, I would pay the bit extra for the SN770. I don't think much of QLC disks in terms of sustained write speed or longevity. The worst-case write speed on the P3+ is well worse than any spinning disk.
It's probably more suited to a laptop than a server or workstation.
 
Personally, I would pay the bit extra for the SN770. I don't think much of QLC disks in terms of sustained write speed or longevity. The worst-case write speed on the P3+ is well worse than any spinning disk.
It's probably more suited to a laptop than a server or workstation.

Perfect, thank you. I'll be doing a lot of OS installs as I build VMs so that will be handy. :)
 
Well that went better than I feared. I bought a Sabrent NVMe to PCIe adapter which was also discounted to a tenner. Ideally I'd have got one which took 2 or even 4 drives, but I couldn't find any reviews for the budget ones on Amazon, so thought I'd play it safe. Feels a really solid bit of kit: big heatsink and different-sized thermal pads to bridge the gap in the enclosure. As I was opening the Z640 I had flashbacks to how hard it was to get a graphics card it would boot with, but it's picked it up and I've now got it as a datastore. I've got two PCIe x16 slots on this motherboard, but assume it would have been fine in an x8 or even x4 slot?


[screenshots attached]
 
I'm trying to work out how to benchmark the disk from vSphere. Obviously running CrystalDiskMark in a VM isn't going to be accurate.

[screenshot attached]
 
Well that went better than I feared. I bought a Sabrent NVMe to PCIe adapter which was also discounted to a tenner. Ideally I'd have got one which took 2 or even 4 drives, but I couldn't find any reviews for the budget ones on Amazon, so thought I'd play it safe. Feels a really solid bit of kit: big heatsink and different-sized thermal pads to bridge the gap in the enclosure. As I was opening the Z640 I had flashbacks to how hard it was to get a graphics card it would boot with, but it's picked it up and I've now got it as a datastore. I've got two PCIe x16 slots on this motherboard, but assume it would have been fine in an x8 or even x4 slot?


[screenshots attached]
The NVMe standard is PCIe x4. Anything more is wasted, but won't hurt it any.
It's best to look at what generation the slots are, as well. You're getting Gen.3 speeds there, but if you have a Gen.4 slot spare it should go even faster.
If the Z640 you're talking about is the HP workstation, you wouldn't want to move it to the x4 slot because it's only Gen.2 (cheap fuckers).
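If it helps, here's the rough arithmetic behind that (per-lane figures are approximate, after encoding overhead):
Code:
# Back-of-the-envelope PCIe throughput per generation and lane count.
# Per-lane figures are approximate usable GB/s after encoding overhead.
PER_LANE_GBPS = {
    "Gen3": 0.985,   # 8 GT/s, 128b/130b encoding
    "Gen4": 1.969,   # 16 GT/s
    "Gen5": 3.938,   # 32 GT/s
}

for gen, per_lane in PER_LANE_GBPS.items():
    for lanes in (4, 8, 16):
        print(f"{gen} x{lanes}: ~{per_lane * lanes:.1f} GB/s")

# An NVMe drive only ever uses 4 lanes, so Gen3 x4 tops out around ~3.9 GB/s
# while Gen4 x4 gives roughly double - which is why a drive rated at 5+ GB/s
# can't stretch its legs in a Gen3 slot.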
 
Oh that's interesting. So if I wanted to add an additional one to my gaming PC (I've used both slots on the motherboard), I could just use an x4 slot. It does look like the HP doesn't go faster than Gen 3 on the x16, so that's probably as good as I'm going to get. I'm actually surprised the benchmark was accurate at all running in a VM, but it's the only VM on the disk so far. Ah well, it's still not shabby and probably faster than anything we use at work.

More frustratingly, the fans are spinning faster since I installed it. It's gone from almost silent to noticeable, and it doesn't sit too far from me. All I've found so far is a single post on the HP forums involving much fucking around in the BIOS, which I'm not sure is relevant :mad:
 
Basically it's only ever going to use 4 lanes, but you want those lanes to be the fastest they can be. That disk should get 5+GB/sec on Gen.4 lanes, but it doesn't look like you have that. You're future-proofed on the disk side, at least!

The new Gen.5 disks are obscenely fast, but I don't think anyone can actually use speeds over 10GB/sec. When I bought a new board recently, I went for 3 Gen.4 slots over 1 Gen.5 and 1 Gen.4. There's really very little in existence that can make full use of PCIe Gen.5. Even with the Gen.4 slots, it's mainly the disks - graphics cards don't even max out a x16 Gen.4 slot.
 
Basically it's only ever going to use 4 lanes, but you want those lanes to be the fastest they can be. That disk should get 5+GB/sec on Gen.4 lanes, but it doesn't look like you have that. You're future-proofed on the disk side, at least!

The new Gen.5 disks are obscenely fast, but I don't think anyone can actually use speeds over 10GB/sec. When I bought a new board recently, I went for 3 Gen.4 slots over 1 Gen.5 and 1 Gen.4. There's really very little in existence that can make full use of PCIe Gen.5. Even with the Gen.4 slots, it's mainly the disks - graphics cards don't even max out a x16 Gen.4 slot.

Realistically an old SATA SSD would have been quite fast enough for what I'm doing (a lot of our Windows servers at work still use spinning rust on a SAN, which is as fun as it sounds). I'd never actually paid much attention to PCIe generations, but your explanation makes a lot of sense. Thank you.

Sadly it looks like I went a bit cheap on the motherboard in my gaming PC, so I've only got 1 Gen 4 and 1 Gen 3 slot, and the only PCIe slot running at Gen 4 is the x16. Not that I actually have much time to game these days anyway.
 
Well for our purposes, you shouldn't underestimate what a vSAN of a few hundred SATA SSDs can achieve. :) I'm not even sure we'll move from SATA when we refresh next year. Throw enough disks at it and it's fast enough. Our only limitation really is being maxed out on RAM at 768GB x 12 hosts.
 
That's quite a workload. We're small fry. :D

So trying to get to the bottom of this fan noise thing, I think I need to poke around in the BIOS. The HP was a fucker to find a GPU that it would boot with; the one out of my old PC that now belongs to my partner worked, but another 3 didn't. Sadly she needs that card for the 4K screen I gave her. Still, I found a cheap nVidia NVS315 which enables it to boot, even though it runs headless. Went to plug a DVI cable in just now and it won't fit, the pins are wrong. Anyone know what this is? I didn't until I googled. It's fucking stupid as there is easily space on the full-height card for another port. :facepalm:

[photo of the connector attached]
So that's another adapter I need to buy.
 
D-Sub, isn't it?

I've been meaning to use one of my two displays for the Raspberry Pi while using it as a Pi-hole (I'm hoping I can). I got a 2-to-1 HDMI switch and a D-Sub to HDMI adapter, so will try it in the next couple of days.
 
D-Sub, isn't it?

I've been meaning to use one of my two displays for the Raspberry Pi while using it as a Pi-hole (I'm hoping I can). I got a 2-to-1 HDMI switch and a D-Sub to HDMI adapter, so will try it in the next couple of days.

No, it's a DMS-59, for driving two displays. I'd never heard of it!

With a Pi Zero I managed to set up Pi-hole with no display. Set the SD card up to allow SSH and then configured it remotely, so that could be an easier option?
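Roughly what I did, in case it's useful - Raspberry Pi OS switches SSH on at first boot if it finds an empty file called ssh on the boot partition of the card. Something like this (the mount path is just an example, yours will be different):
Code:
# Enable headless SSH on a freshly flashed Raspberry Pi OS card: an empty
# file named "ssh" on the boot partition is all it takes.
from pathlib import Path

BOOT_PARTITION = Path("/media/you/bootfs")  # example mount point - adjust for your machine

(BOOT_PARTITION / "ssh").touch()  # contents are ignored, the file just needs to exist
print("SSH will be enabled on next boot - then connect with: ssh pi@<address of the Pi>")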
 
I used a display to set it up but it seems to be chugging away happily now without the display. I'm hoping I can explore the Raspberry Pi a bit by connecting it to one of my displays while it's still working as a Pi-hole, rather than getting another one.
 
I used a display to set it up but it seems to be chugging away happily now without the display. I'm hoping I can explore the Raspberry Pi a bit by connecting it to one of my displays while it's still working as a Pi-hole, rather than getting another one.

You could set up some remote desktop software if you wanted to have it available to play with without leaving it connected to a display the whole time?
 
Well for our purposes, you shouldn't underestimate what a vSAN of a few hundred SATA SSDs can achieve. :) I'm not even sure we'll move from SATA when we refresh next year. Throw enough disks at it and it's fast enough. Our only limitation really is being maxed out on RAM at 768GB x 12 hosts.
WTF are you doing? Is that 768GB of RAM installed over 12 machines, or 12 x 768GB? Either is completely mind-boggling BTW.
 
12 x 768. It does have to run 500 or so VMs. It's a VxRail cluster, so it tries hard to act like one big machine but there are limitations on how much ram/cpu a VM can have because you can't split one among hosts. Plus it has to have enough spare to be able to run the entire thing with a site down (so minus 6 hosts). To be fair, it's the background infrastructure for a Uni with 20k+ students so it's not that big. VxRail has its quirks, but it's proven to be very reliable and kept everything up when Facilities accidentally downed the power at the main site. Which was a serious brown pants moment until we realised that everything still worked.
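The back-of-the-envelope version of those headroom sums, using the numbers above (nothing clever going on):
Code:
# Rough headroom sums for the cluster described above.
HOSTS = 12
RAM_PER_HOST_GB = 768
HOSTS_LOST_IF_SITE_DOWN = 6   # half the hosts live at each site
VM_COUNT = 500                # approximate

total_ram = HOSTS * RAM_PER_HOST_GB
surviving_ram = (HOSTS - HOSTS_LOST_IF_SITE_DOWN) * RAM_PER_HOST_GB

print(f"Total RAM across the cluster: {total_ram} GB")                      # 9216 GB
print(f"RAM left with a site down:    {surviving_ram} GB")                  # 4608 GB
print(f"Average per VM in that case:  {surviving_ram / VM_COUNT:.1f} GB")   # ~9.2 GB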
 
12 x 768. It does have to run 500 or so VMs. It's a VxRail cluster, so it tries hard to act like one big machine but there are limitations on how much ram/cpu a VM can have because you can't split one among hosts. Plus it has to have enough spare to be able to run the entire thing with a site down (so minus 6 hosts). To be fair, it's the background infrastructure for a Uni with 20k+ students so it's not that big. VxRail has its quirks, but it's proven to be very reliable and kept everything up when Facilities accidentally downed the power at the main site. Which was a serious brown pants moment until we realised that everything still worked.

That's some pretty awesome redundancy. I've no idea what would happen if one of our hosts went down at the Zen datacenter, but it wouldn't be pretty. What I do know is it would be me dealing with the phone calls. :(

I wish my attic space didn't get so hot and I could actually put a rack of servers up there to play with stuff like this, but I'm limited to my office and I truly hate fan noise.
 