How to Build a Massive Off-Site Backup Server Using Old Hard Drives

Learn how to repurpose old hard drives into a large off-site backup server. The video covers hardware like a 60-bay NetApp disk shelf, software options including ZFS and Unraid, and practical tips for managing mismatched drives. It emphasizes the importance of the 3-2-1 backup rule and provides guidance on power consumption and reliability.

English Transcript:

At the end of 2025, I would have said this drive is garbage: a 3-terabyte WD30EZRX. It's three terabytes, just large enough that it's right on the borderline. And I've got a bunch of drives like this, from 3 to 12 terabytes, that are basically end of life. Wouldn't it be nice if we could use them? And where we find ourselves today, even more than ever, we really need to press those into service. Well, there is one thing they can do, and that is off-site backup. But you need something to manage that. You need to put a little project together to handle it. There is software and hardware that has to come together to make this practical,

because just shoving a bunch of hard drives into a desktop PC tower case is generally not reliable. Cabling, power delivery, everything else: it's just not a good idea. It's worth spending a few hundred bucks on something to put them in. But what do you put them in? What if you overspend, and how does all of that work? This video, and the accompanying how-to guide, is for you. ZFS almost supports this, thanks to heroic dev efforts all the way from the Tech Tips man down to ZFS and Klara Systems. We'll talk about all that. We're going to build some amazing stuff from garbage. And I'm sorry, it is garbage. But hey, hoarders, tech hoarders: now is your time to shine. And if you have a friend who's a tech hoarder, you should probably send them this video, because you can gently encourage them to put their hoarding to good use. "Oh yes, I have a tech hoard, but let's actually use it for the good of mankind." This also means you're not paying for cloud backup services that were going to overcharge you anyway. All right, let's get to it. It all starts down here in the scary basement, where hard drives go to die. Three terabytes. We had four or five of these. One terabyte.

I'm not going to use the 1-terabyte drives; I don't think one terabyte is large enough. Ooh, a Seagate IronWolf 10 TB, but it looks like it's been damaged. Maybe somebody dropped it and doesn't want to use it, something like that. It probably works, if it's down here as opposed to in the garbage. But this hodgepodge of drives is going to work great in this: our NetApp disk shelf. I don't even know what model this is; it's the NHJ503. It's just a 60-bay NetApp disk shelf. Sixty bays, at 12 drives per sled. And you don't really need any kind of mechanical fastener in here. You do a little bit for the front, just for airflow reasons, but the drives just drop and slide in.

Now, pro tip: if you have heavy paper, like resume paper, a cotton or linen bond, and you cut out sheets of it to sit in here, that actually does help with vibration dampening on mechanical hard drives. I would recommend that you lay that in here. This disk shelf also technically depends on there being either a blank or a drive in the front spots here so that the airflow works correctly and keeps the drives adequately cool. So I would recommend a minimum of 12 hard drives if you're going to get a 60-bay enclosure like this, and use the four bays at the front of each shelf.

Now, another thing to think about is the reliability of these drives. They've been sitting down here; they might have bad sectors. We need to examine them, and to do that, you're going to have to run fitness tests that may take days. So pick a shelf: say the top shelf is for the sus drives, and the other shelves are for drives that have completed testing. That way you can test your drives 12 at a time, make sure they're okay, and then move them from the top shelf into the other shelves, because we really do need to run fitness tests.

Most of these drives, from the 10-terabyte drive down to the 3-terabyte drive, have really similar performance when it comes to moving data off of them. They'll move data at an average of around 200 megabytes per second. Newer drives are a little faster, older drives are a little slower, but it's not dramatic. It's not like NVMe, where we've gone from a generation that would do 4 gigabytes per second to 8 gigabytes per second to 16 gigabytes per second. Even hard drives that are 10 years old are still on the order of about 200 megabytes per second. The fastest mechanical hard drives you can get today are only about 300 megabytes per second, roughly 33% faster, and that's still relatively anemic. It's really not super fast. So it'll take a long time to test a 10-terabyte drive, and a 3-terabyte drive won't take nearly as long.
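
To make the 12-at-a-time burn-in concrete, here's a minimal sketch of kicking off SMART extended self-tests across a batch of drives. The device names are placeholders for whatever your shelf enumerates as; the test runs inside each drive, so starting them all at once is fine:

    # Start an extended (long) self-test on twelve drives at once.
    for d in /dev/sd{a..l}; do
        sudo smartctl -t long "$d"
    done

    # Hours (or a day) later, read back each drive's self-test log.
    for d in /dev/sd{a..l}; do
        echo "== $d"
        sudo smartctl -l selftest "$d" | head -n 8
    done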

Okay, so if we're putting all of our drives in here, how do we get it on the network? Well, this is just a disk shelf; it's not a computer. Okay, it has a computer in it, but not a computer for accessing the drives. It's a computer for managing the fans and the temperatures and the power supplies and everything else. It has nothing to do with the data. At the back here, we have IO modules. These IO modules are SAS, serial attached SCSI. There are four ports, and really it's designed to use just two of them: two go to the computer, and the other two go out to other disk shelves. So if 60 drives is not enough and you want 120 drives, you can stack these together. That's not a problem. Each one of these connectors has four channels, and each channel can do 1.2 gigabytes per second of throughput. The power supplies go in these top and bottom slots, which I took out because they're kind of heavy and it's hard to move this thing. But this is where the cable to the computer goes. You could just use one, and that one cable goes to your computer. Or, if you connect two, you can use two connections on your computer and it's redundant: if one has a problem, the other takes over, so you always stay connected to your disks. If you hook it up like that, you can use a controller that has four ports, and then the shelf really sort of stops being the bottleneck, because you can pull data off of this thing at a little over 16 gigabytes per second: 8 gigabytes per second on the top eight channels here and 8 gigabytes per second on the bottom eight channels there. Now, some enclosures, some IO modules, will support even going four and four, so you could have 16 channels into the computer. I don't think this particular shelf does that; it's really not even designed for 16. It's really designed for eight and eight, plus a little bit of something called multipathing in Linux.
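
As a sketch of what that multipathing looks like on the Linux side: with both IO modules cabled to the host, every disk shows up twice, and dm-multipath merges the duplicate paths into one device. Package and service names vary by distro; this assumes a Debian/Ubuntu-style system:

    sudo apt install multipath-tools   # device-mapper multipath userland
    lsscsi                             # each shelf disk appears on two paths
    sudo multipath -ll                 # confirm the merged multipath devices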

Multipathing is what could give you up to 16 gigabytes per second, but I think 8 is much more reasonable. Remember, 25-gigabit Ethernet is only about 2.5 gigabytes per second, and your internet connection isn't that fast anyway. So 2.5 gigabytes per second over 25-gigabit is plenty fast, and if we add enough hard drives in the right geometry, we'll be able to meet that. So let's get our garbage hard drives connected to this, and let's put together an inexpensive controller machine running Linux to plug into the disk shelf. We'll set that up next.

Okay, we've got the disk shelf. What are we going to do for the connection to the computer? I want to stop for just a second. Don't splash out for the disk shelf yet.

If this is all a new world for you, and this is your first video here, congratulations: welcome to Level1Techs. You have more homework to do than is in just this video. We have a forum; there's a lot to discuss. If you have a desktop or tower computer that'll hold, say, four, six, or eight drives, and that's enough drives for what you want to do, you don't even need a disk shelf. You probably do need a SAS controller (serial attached SCSI, Mini-SAS HD), and you're going to have to learn a whole new world of connectors and cables to go with it. But if you've got a relatively small number of disks, you can set it all up in a desktop computer on the network, put it together, and that'll work fine.

What kind of computer do you need? This is a garbage-tier OptiPlex 3020. It is literal garbage, and it can be used as a server. You may read online that you need error-correcting memory. Error-correcting memory does help, because when these things age they get a little squirrelly, and it's helpful to have as much telemetry as possible to know when things are going squirrelly. If you get an old retired server, it will typically have error-correcting memory. It's not the end of the world if you don't have it, though. But you do have to learn some skills to figure out when your machine is dying.

You also have to listen to your drives. If you have a drive that's a little flaky or a little on its way out, don't keep trying to use it. There are at least four or five threads on the Level1Techs forum seemingly every month or every couple of months (we've had a lot lately) where folks come in and say, "Oh, my ZFS pool is throwing errors on this drive." And they just keep using it. Then another drive starts throwing errors, and then another, and then it's oops, all my data is gone.
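
If you want the machine to nag you instead of silently eating drives, a tiny sketch like this in cron works; the email address is a placeholder, and it assumes local mail delivery is configured:

    #!/bin/bash
    # zpool status -x prints "all pools are healthy" when nothing is wrong.
    if ! zpool status -x | grep -q 'all pools are healthy'; then
        zpool status -v | mail -s "ZFS pool is throwing errors" you@example.com
    fi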

Yeah, when a drive is throwing errors, maybe you shouldn't use it anymore; or take it out, run some more diagnostics on it, and see if you can repair it so it actually works again. A lot of modern drives, like these WD 8 TB drives, will throw unreadable or uncorrectable sector errors, but they have a limited ability to remap those sectors, just like flash drives. So you can take the drive out, run the SMART long offline diagnostic on it, run Western Digital's utilities on it, and about half the time you can return it to reasonable working condition. Now, I would not trust that drive with my mission-critical data, but for backups (remember the 3-2-1 backup rule) it might be okay, as long as it's not constantly throwing errors.
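When you're deciding whether a remapped drive is backup-worthy, the SMART attributes to watch are 5, 197, and 198. A quick hedged sketch (the device name is an example):

    sudo smartctl -a /dev/sda | grep -E \
        'Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable'
    # A nonzero but stable Reallocated_Sector_Ct may be tolerable for a
    # backup-only pool; a climbing Current_Pending_Sector count is a drive
    # that belongs back on the "sus" shelf.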

For what we're going to use with the disk shelf, something like this will work fine. This is a 7th-generation Intel system. It has 32 gigs of DDR4 memory, which is kind of a lot; you would be okay with 8 or 16 gigabytes. And I'm going to use this fancy-pants Avago SAS controller, which has external Mini-SAS HD ports. Cables go in here and out to our disk shelf. And even though this is only 16 channels (four groups of four), the disk shelf has hardware in it that lets you multiplex 60 connections. So even though there are only 16 channels here, we can connect 60 hard drives this way. It doesn't give you more bandwidth, but you get 60 connections.

If you're doing this inside a desktop and you've got something like this, we've got eight connections here across two four-port connectors. They're the same kind of connectors, just the inside-the-computer version. Then you need a cable like this, which goes from Mini-SAS HD to something on the other end. Now, it can look like SATA, but I actually recommend the SFF-8643 to 4x SFF-8482 cable. The difference is that the drive-end connector has built-in SATA power, and those connectors are both SATA and SAS compatible. Serial attached SCSI is the server-class hard drive interface where the connector on the back of the drive does not have a break in it. We've done videos on that in the past; you can check those out. But these cables are compatible with both SAS and SATA drives. You can get a cheaper version of this cable that only has SATA-style connections, which isn't really designed to go directly into a drive most of the time; it's designed to go into a backplane. A server has a circuit board in it, the drives plug into the circuit board, and the controller connects to the circuit board. But that doesn't matter right now. It's all good; you can just put it together like Legos. Come to the forum if you're lost, and we'll point you toward some more videos. It's fine.

Software. Software is the next thing we've got to talk about. So how do we make all this go? Linux, but it can also be point-and-click. The go-to option for building a kind of garbage server like this, from a random hodgepodge of different-sized drives, is Unraid, as far as ease of use and usability go. Engagement challenge: what's your experience with other NAS software? Now, a lot of longtime followers of the channel will say, "Wait a minute, Unraid? Really?" I'm a big fan of ZFS. There is no more reliable file system out there than ZFS. It ticks all of the boxes for reliability. It is literally nothing else except reliable: it's not the fastest across any one of a dozen scenarios you can come up with, but it is trustworthy, it is reliable, and it has a very solid architecture. Unraid is not that. Unraid supports ZFS, and you can use ZFS on Unraid, but when you have a mismatch of drives like this, ZFS historically has not handled that well. The other reason I recommend Unraid is that it has a really excellent plugin for drive fitness analysis, a preconditioning, pre-tuning kind of plugin. In addition to SMART diagnostics, which are commands you can send the drive so it runs its own diagnostics (okay, but not fabulous), you really need to do a full drive fitness test that's going to take days. And good software to do that is a little tricky. Even the manufacturers' software will try to paper over problems with the drive, to try to get you not to send it in for warranty. Obviously, none of the drives we're using are under warranty. So you can use a Western Digital utility, or Toshiba's utility for its mechanical storage, or whatever goes with our 3-terabyte Barracuda. It can be a little challenging to find the right utility to go with the right drive from the right genre.

Whereas SMART generally just works. And using a utility to just fill up the drive, look for errors, and then read the data back to be sure that what you wrote is what you got is also fine.
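
The classic fill-and-verify tool on Linux is badblocks. A minimal, destructive sketch (it wipes the drive, which is the point of burn-in; 4096-byte blocks also keep it from choking on very large drives):

    # -w: write four patterns and read each back; -s: progress; -v: verbose
    sudo badblocks -wsv -b 4096 /dev/sdX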

TrueNAS is a good option. It's ZFS-based, it's point-and-click, it's all good, except for the fact that we have lots of mismatched drives. ZFS is really designed for drives that are all the same size, or roughly the same size. I wouldn't feel too bad about a mix of, say, 8 and 10 TB drives. But we've got a 3 TB WD Green and a 3 TB WD Red in here, and they are actually slightly different sizes, because 3 TB is an approximation. ZFS will always use the lowest available amount of space, so the smaller 3 TB drive wastes a little bit of space on the larger 3 TB drive.

Now, there is something on the horizon you should be aware of: HexOS. Yes, HexOS. It's called ZFS AnyRaid, and it's coming. I'm actually building this to test it; that's part of what's going on here with making this thing out of garbage. ZFS AnyRaid is a ZFS option for a lot of mixed drives. I haven't gotten to testing it yet; that's why we're building this. But per the documentation, it's really designed for RAID-Z1. RAID-Z2 and Z3 give you two and three drives of redundancy, and since we're making this pool out of garbage, I really want something with three drives' worth of redundancy, because I anticipate that we will lose more than one drive at a time, or more than one drive will decide to be flaky even after we've done our burn-in testing. We've got up to 60 drives in the pool here. ZFS AnyRaid is being implemented by Klara Systems, and that's one of the things that Linus, Mr. Tech Tips, invested in. So it's really exciting to see this come together. If you've supported HexOS, you will be able to use it in HexOS. And I hope HexOS gets an ungodly, fire-breathing drive fitness tool, because the Unraid plugin for drive fitness testing is quite good. If you just want to use Linux (just boot up Ubuntu and run it), there's a script on the Level1Techs forums. You can run tests on your drives with that, and it would probably give you a pretty good idea of how well your drives are working. But it's on you to test and confirm, 3-2-1 backups, all that kind of stuff. That's enough jawboning. Let's get it put together.

Oh, one thing that just occurred to me: if you do use one of these cards, see how they've got big heat sinks on them? These are designed to go in servers, and servers have a lot of airflow over these cards. A repurposed desktop like this does not have a lot of airflow in the card area. You're going to have to strap a fan on the side, because this card needs active airflow. It will overheat, and it will murder everything. Don't let these overheat; they need a lot of active airflow. A lot. And because they're designed for the enterprise, they'll generally sacrifice themselves in pretty short order to keep everything else running. So, yes, active cooling. All right, let's get this put together.

Let's get all my garbage-tier drives into the chassis and see where we stack up on space. We can do everything else remotely. Literally just something like that with zip ties and you're good. Oh, I almost forgot: you've got some garbage-tier SATA SSDs, 128 gig, 256 gig, that you can't use for anything else. Maybe you can collect them from your friends and family. These will work great in this disk shelf as well, so I'm going to add two or three 256-gig SATA drives. Now, you shouldn't put these in the same storage pool as everything else. You should definitely create one storage pool that's only flash and another that's only mechanical storage, because otherwise the performance characteristics of this setup are going to be a little janky.
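
A sketch of that flash/rust split in ZFS terms, with hypothetical by-id device names (by-id paths beat sdX letters when you have dozens of drives):

    # Small, fast pool out of the scavenged SATA SSDs.
    zpool create flashpool mirror \
        /dev/disk/by-id/ata-SSD_SERIAL1 /dev/disk/by-id/ata-SSD_SERIAL2

    # Separate pool for the mechanical drives, so the janky mixed
    # performance characteristics stay in their own lane.
    zpool create rustpool raidz2 \
        /dev/disk/by-id/ata-HDD_SERIAL{1..8}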

All right, let's get to it. Now, you may be wondering: what's the catch? You've got a 60-bay enclosure like this, you can make use of your old hard drives, and you've got something to manage the cabling and the chaos and the mess. What's the downside? There are a couple. One: power usage. If your drives are spinning all the time, they're using a lot of power, and power is expensive. It might not make economic sense to keep the drives spun up all the time.

Fortunately, this chassis can be reasonably power efficient if the drives are not spun up. You can use software to turn the drives off when they're not being used. So when a drive is not actively receiving or sending a backup, it can be off, and that'll save you a significant amount of power. If you're running a media server, it's maybe a different story: if somebody's actually using the media server, then all of the drives that comprise the media pool pretty much need to be on. There is some software trickery you can do, like: somebody's streaming a movie, so read the entire movie into memory and let the hard drives go to sleep.
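
For the spin-down itself, something like hdparm covers most SATA drives; this is a sketch, and the device name is an example. SAS drives often ignore hdparm, in which case sdparm or the hd-idle daemon are the usual fallbacks:

    # -S sets the standby timeout: values 1-240 count in 5-second units,
    # 241-251 count in 30-minute units, so 242 = spin down after 1 hour idle.
    sudo hdparm -S 242 /dev/sdX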

After five minutes or so, if that doesn't turn into a cache miss, that's an option. But generally, power usage can be a bit of a problem. There's another downside with this particular chassis: it requires 220 volts. It will not run off of 110. If you're putting it in a basement or near a power panel, it's not really a big deal to put in a 220-volt outlet. Old air-conditioner outlets, like for window-unit air conditioners, are 220, so you can get a power cord for that and hook it up to 220. But this chassis will not power on without 220, and that's part of why they only cost $250 to $300 before shipping, give or take. So that is a major downside. But you can also get a 24-bay or otherwise smaller version of this kind of chassis, the older NetApp disk shelves. I turned Linus on to NetApp disk shelves like 10 years ago, and those are a little long in the tooth now, meaning they're old, 10-year-old disk shelves, and that's not great from a reliability standpoint, because those power supplies do age. But it's an option if 24 or 48 drives will get the job done.

Okay, I'm back at my desk. Everything's put together, so we can do the boring parts. The system can see the drives; you can run lsblk.

And you can see the guide, the buddy backup guide, on the Level1Techs forums. But just to give you an idea of what this looks like, here's the smartctl output for one of the random drives. We've got drives from 3 terabytes to 10 terabytes, just a random hodgepodge mix of 3, 4, 6, 8, and 10 terabyte drives. And that's pretty good; it's pretty good to have 60 drives of random capacity like this, because it's a lot of space in aggregate, as you'll see. But we've got to run tests on all these drives, because a surprisingly large number of them are going to have issues.

Here's our 10-terabyte hard drive. We ran a SMART test, SMART support is enabled, and it says the overall result is passed. But then we look at the detailed SMART results, and it doesn't look super amazing; it looks like this drive might be struggling. The extended offline test completed, but with a read failure. Maybe we do some other testing; this could be a candidate drive for that. If we do an extended offline test or run other utilities, the drive may be able to repair itself, and then it may or may not be trustworthy for data. You don't know. But there's a utility on GitHub called BHT, and how to use it is detailed in the guide on the Level1Techs forum.

And you can test a bunch of drives in parallel; you don't have to wait for one at a time. So you could theoretically test all 60 of these drives in something like a week, give or take. I ran BHT on a selection of about 12 drives to start, and it is going to run badblocks on all of those disks to see what they do. I would also recommend that you run smartctl and let it do an offline extended test, with the command in the how-to, and then look at the smartctl -a output. Look at your drives and see how long they have been powered on. Look at this drive: it has been on for 79,000 hours. That's about nine years of power-on time. We are well beyond the useful lifetime of this drive.
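
Here's a small sketch for surveying power-on time across everything the HBA sees; it scrapes the raw value of SMART attribute 9, which on most (not all) drives is plain hours:

    # Widen the glob (e.g. /dev/sd??) once you're past 26 drives.
    for d in /dev/sd?; do
        hours=$(sudo smartctl -a "$d" | awk '/Power_On_Hours/ {print $10}')
        printf '%s: %s hours (~%d years)\n' "$d" "${hours:-?}" $(( ${hours:-0} / 8760 ))
    done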

Nine years of power-on time, but it tests fine. And if we have enough drives' worth of redundancy baked in here, like if we're building a RAID-Z3, then okay.

So where did we end up capacity-wise? Out of our 60 drives, 10 were not usable. We had twenty 3 TB drives with an average power-on lifetime of 7 years. We had sixteen 4 TB drives with an average power-on time of 6 years. And we had six 10 TB drives, four 8 TB drives, and four 6 TB drives. Call that about 240 terabytes of raw capacity.
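
That raw figure is just the drive inventory multiplied out; as a shell one-liner:

    # 20x3TB + 16x4TB + 6x10TB + 4x8TB + 4x6TB, in terabytes
    echo $(( 20*3 + 16*4 + 6*10 + 4*8 + 4*6 ))   # prints 240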

I need this to be a storage volume, and we talked earlier about HexOS and AnyRaid, where I could just mix and match a whole bunch of drives. That's not ready for prime time yet, but it is also open source, so you don't have to have HexOS to use it. HexOS is probably going to be the first to support it, with first-class support, and you should probably use it there first. But I, being a rando, can use the patch set from GitHub. To get it working right now, you have to use an out-of-tree custom patch set; I'm kicking the tires a little early, in other words, to test it out and see how it goes. And AnyRaid could work pretty well with this setup.

Notice that our smallest configurations are two groups of four drives: four 8 TB drives and four 6 TB drives. And that's not really a lot of fun, because if you have one drive of redundancy in a group, that's RAID-Z1, and it's like, okay, maybe, if one of those drives died; I have done the fitness pre-testing here. But we also lost 10 of our 60 hard drives, so I only have 50 hard drives that I'm willing to use, even with some reported SMART errors. At the same time, this is over 200 terabytes of capacity, which is a lot, but the electricity cost is also a lot.

Since I've got twenty 3 TB drives and sixteen 4 TB drives, I'm just going to create RAID-Z vdevs with those. So I'm not going to use AnyRaid for this. My drives are close enough in size, though this is an odd hodgepodge of 3 TB drives: a mix of WD Red, WD Green, and Seagate, different brands, and the performance characteristics are different. For the 3 TB drives I'm doing RAID-Z3, meaning I have three drives of redundancy: 17 drives' worth of capacity for data, three drives' worth of capacity for redundancy. The sixteen 4 TB drives, same deal, though I could probably go RAID-Z2.

So, I'm going to do something a little unusual and mix my RAID-Z levels inside one pool. This is a little dangerous, and I'll explain why. The twenty 3-terabyte drives: that's going to be RAID-Z3. The sixteen 4 TB drives: that's going to be RAID-Z2. Then my four 8-terabyte drives and my four 6-terabyte drives are each going to be RAID-Z1. So I can lose one drive out of either of the two smaller constituent vdevs, or two drives if they happen to be 4 TB drives, or three drives if they happen to be 3 TB drives. If I lose more than one of my 6-terabyte drives, I lose the entire pool. If I lose more than one of my 8-terabyte drives, I lose the entire pool. The 10-terabyte vdev was a tough decision, RAID-Z2 versus RAID-Z1; ultimately I went with RAID-Z2, so up to two of my 10 TB hard drives can fail before I lose the pool. And this is just ZFS stuff. Now, when you create a pool like this, ZFS will complain that the levels of redundancy are different.
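
As a sketch, here's the whole mixed-redundancy pool in one zpool command, with placeholder by-id names and the usual 4K-sector ashift; the -f is what overrides the mismatched-redundancy complaint just described:

    zpool create -f -o ashift=12 backuppool \
        raidz3 /dev/disk/by-id/DISK_3TB_{01..20} \
        raidz2 /dev/disk/by-id/DISK_4TB_{01..16} \
        raidz2 /dev/disk/by-id/DISK_10TB_{1..6} \
        raidz1 /dev/disk/by-id/DISK_8TB_{1..4} \
        raidz1 /dev/disk/by-id/DISK_6TB_{1..4}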

What that means is it's telling you that you're creating a pool from constituent vdevs that have different RAID-Z levels, meaning they have different levels of fault tolerance. We have more fault tolerance in the 3 and 4 TB vdevs than we do with our larger-capacity drives. And the hilarious real-world reality is that the higher-capacity drives are perhaps more unreliable than their older, smaller counterparts. But it's also a bathtub curve: as drives get old, the failure rate goes up a lot, and these 3 and 4 terabyte drives are basically end of life. So RAID-Z, that's how I'd do it. With Unraid, you can kind of figure it out; Unraid will put it together. Unraid without ZFS is, in my opinion, a little more unreliable, and Unraid with ZFS is not going to figure out a pool with this level of complexity. I would not fault you at all for setting up a bunch of different shares: one pool composed of the 3 and 4 terabyte drives, and then the other drives used some other way. Maybe you just create a small pool of those drives and that's someone else's backup.

The other consideration here is that this is a second backup copy. It is not a catastrophic situation if the backup system fails and is offline, or has to be rebuilt and then we have to take a fresh backup, unless the backup system fails while some other system also fails. So I'm a little bit more risk tolerant here, but even with that, I still went with RAID-Z3. And again, the promise from HexOS is to try to hide some of this complexity from you and make it a little easier, with the promise of AnyRaid in RAID-Z. But I'll be waiting for AnyRaid with RAID-Z3, because I'm going to want three drives' worth of redundancy, and that's going to be a little trickier to work out.

The BHT utility is older, but it's really handy. When you run it, it gives you the output of smartctl -a, so understand that's what it's doing under the hood, and it actually gives you the pre and post versions of that.

It'll also run badblocks and tell you what's going on there. Badblocks is kind of slow; there are faster ways to do this. I'm really tempted to fork the repository and add some cool features to it, but BHT will get you there for now.

Okay, this turned out to be a bit more of an adventure than I thought; you're going to have to check the thread in the forum. Ten of our 60 drives were completely unusable. I can probably dig up another 10 drives, but that's okay. I've created a ZFS pool, two ZFS pools, actually, and there are some more details about that on the forum. I've also done some performance testing, and the performance is uneven, as you would expect, because these are different classes and genres of drives. Just because it's a 3-terabyte drive doesn't mean that all 3 TB drives perform the same. And because of that, I'm deviating from every best practice you possibly can deviate from. But again, it's just going to store backups. So my IO latency is a random hodgepodge, and my sequential throughput, the best-case scenario, is also a random hodgepodge. On a scanning workload, a 25-terabyte scan that hits all the disks, we start out pretty good at about 5.5 gigabytes per second and then trail off to around three, and we can't maintain anything that even.
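
If you want to reproduce that kind of sweep yourself, fio is the usual tool; this is a generic sequential-read sketch, not the exact test from the video, and the mount point is a placeholder:

    fio --name=seqread --directory=/backuppool --rw=read --bs=1M \
        --size=50G --numjobs=4 --group_reporting
    # Use a size comfortably larger than RAM, or ZFS's ARC will serve
    # the reads from memory and flatter the drives.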

That might sound terrible, but you've got to keep in mind this thing is going to be connected to the network at probably not even 10 gigabit, and 10 gigabit is just over one gigabyte per second. So for a backup appliance, or something that stores a lot of media, this is basically okay. Now let's talk about power usage. The disk shelf itself is just over 200 watts. And remember, it requires 220; well, it requires 208 or 240 volts. You can't plug it into 110. The shelf by itself, with no disks in it at all, is just over 200 watts. When the disks are spun up and doing a bunch of work, the power usage was not quite as high as I expected,

but it's still on the order of 800 watts, give or take. That's like half a space heater. When the drives are idle and spun down, and the disk shelf isn't doing anything and the drives are just hanging out, it's on the order of 320 to 330 watts. I'm using an Eaton metered power strip (it's a data center thing) to figure out the power utilization. So when the thing is sitting there idling, not doing anything, with a bunch of drives in it, you can count on it using 300 to 350 watts, something in that neighborhood. And when it's spun up and the system is really busy, you're knocking on 700, 800, 900 watts. The higher-capacity drives are probably going to be a little bit higher utilization, and if you have SAS drives instead of SATA, that'll be a little bit higher still.

And remember, I've got a fair number of SATA SSDs in there. So what's the story with the SATA SSDs? The SATA SSDs worked out better than anything else here. The disk shelf, the way it's architected, is a little bit of a bottleneck here, and the SATA SSDs can absolutely saturate my interface back to the machine. It's not great, but if you're just pooling together a bunch of 256 GB SSDs to have a nice fast storage pool, it'll work. If you're in a spot where you've only got four, six, or eight SATA SSDs, though, it may be better to use an internal controller for them.

So, there you go. For a $300 disk shelf, this is amazing. And when you have a whole bunch of drives, the whole uneven-drive-size thing becomes less of a worry. Be that as it may, I am waiting with bated breath for AnyRaid on HexOS to become more stable; that's something to keep an eye on. Until then, the combination of multiple ZFS pools with something like MergerFS (MergerFS lets you join together different independent file systems) is probably where the rest of us will be. Or possibly Unraid, with its ability to deal with multiple different-sized drives. That is an option.
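
A sketch of that MergerFS arrangement, with placeholder pool names and mount points; each ZFS pool stays an independent file system, and MergerFS just presents one tree on top:

    # category.create=mfs sends new files to the branch with the most free space.
    sudo mergerfs -o defaults,allow_other,category.create=mfs \
        /backuppool:/flashpool /mnt/storage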

But for any more in-depth information and some more chitchat, you're going to have to come to the forum. I'm Wendell with Level One. I'm signing out, and you can find me in the Level1Techs forum.
