RAID 6 would be the obvious next step. It depends on how critical the system is, and the discussion isn't relevant for many of the most critical systems anyway: they are connected to a SAN instead.
We live with the possibility of hardware failure in our cameras as well: flash cards fail regularly, which is probably why many cameras let you mirror to two cards.
Well, SANs consist of (RAID) disk sets in the background, and those disks die at much the same rate as the ones we are discussing here.
SANs are abstractions; they just help to manage disk space more efficiently and independently of servers.
And they often have supportive measures built in to deal with disk imperfections.
For our purpose:
There are quality differences between drives: desktop SATA is the worst, server SAS/SCSI the best, and server SATA better than average. You get what you pay for. Think in price factors.
Always add hot spares. Always.
Cold spares (sometimes called stand-by disks) only make sense when they don't share the mechanical vibration domain of the main cage or case.
Read performance: RAID 5, 6, and 10 deliver about the same read performance.
Write performance: RAID 10: 50% write performance, RAID 5: 25%, RAID 6: 17%.
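Those percentages follow from the classic RAID write penalties: every logical write costs two disk operations on RAID 10, four on RAID 5, and six on RAID 6. A minimal sketch (the 100 IOPS per disk is an assumed figure; real controllers cache and coalesce writes, so treat this as a back-of-the-envelope model):

```python
# Classic write penalties: RAID 10 writes two mirror copies; RAID 5 must
# read data + parity then write data + parity; RAID 6 adds a second parity.
WRITE_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}

def usable_write_iops(disks: int, iops_per_disk: int, level: str) -> float:
    """Aggregate random-write IOPS the array can sustain, first order."""
    return disks * iops_per_disk / WRITE_PENALTY[level]

# Example: 4 disks at an assumed 100 IOPS each (400 raw).
for level in ("raid10", "raid5", "raid6"):
    print(level, usable_write_iops(4, 100, level))
```

With 4 disks at 100 IOPS each this yields 200, 100, and about 67 usable write IOPS, i.e. 50%, 25%, and ~17% of the 400 raw IOPS, matching the figures above.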
Space efficiency (minimum of 4 disks): RAID 10 is fixed at 50%, RAID 6 at 50%, RAID 5 at 75%.
Space efficiency (8 disks): RAID 10 is still only 50%, RAID 6 at 75%, RAID 5 at 87.5%.
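The efficiencies above come straight from the parity overhead: RAID 10 always mirrors everything, RAID 5 sacrifices one disk's worth of capacity, RAID 6 two. As a quick sketch:

```python
def space_efficiency(disks: int, level: str) -> float:
    """Fraction of raw capacity that is usable, ignoring hot spares."""
    if level == "raid10":
        return 0.5                    # everything is mirrored, always half
    if level == "raid5":
        return (disks - 1) / disks    # one disk's worth of parity
    if level == "raid6":
        return (disks - 2) / disks    # two disks' worth of parity
    raise ValueError(level)

for n in (4, 8):
    print(n, {lvl: round(space_efficiency(n, lvl), 3)
              for lvl in ("raid10", "raid6", "raid5")})
```

Note how RAID 5 and 6 get cheaper per usable byte as the array grows, while RAID 10 never does.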
Reliability (4 disks): RAID 6 is about 450 times as reliable as RAID 10, and RAID 10 about 10 times as reliable as RAID 5.
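The ordering can be sketched with a first-order probability model: assume each disk independently fails with some probability p within the window it takes to replace and rebuild a dead disk (the 2% below is an arbitrary illustration). This deliberately ignores rebuild duration, which is why it reproduces the ranking but not the exact 450x/10x ratios quoted above:

```python
from math import comb

def p_array_loss(p: float, level: str) -> float:
    """First-order probability of losing a 4-disk array when each disk
    independently fails with probability p during the exposure window."""
    if level == "raid5":   # any 2 of 4 failures lose the array
        return comb(4, 2) * p**2
    if level == "raid10":  # only both disks of the same mirror pair (2 pairs)
        return 2 * p**2
    if level == "raid6":   # any 3 of 4 failures
        return comb(4, 3) * p**3
    raise ValueError(level)

p = 0.02  # assumed per-disk failure probability during the window
print({lvl: p_array_loss(p, lvl) for lvl in ("raid5", "raid10", "raid6")})
```

Whatever p you pick, RAID 6 comes out far ahead because it needs a third failure, which multiplies in another small factor of p.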
Due to the probability of a disk having bit errors, it is good advice to keep arrays of very cheap disks (say, desktop SATA/P-ATA disks) small: 10 Gbytes (4 disks) or so.
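The reasoning: a rebuild has to read the surviving disks end to end, so the bigger the array, the better the odds of tripping over an unrecoverable read error mid-rebuild. A simple model, assuming independent bit errors at a rate of 1e-14 per bit (a typical datasheet figure for desktop SATA drives; the array sizes are illustrative):

```python
from math import exp

def p_unrecoverable_read(array_bytes: float, ber: float = 1e-14) -> float:
    """Probability of at least one unrecoverable bit error while reading
    the whole array once (e.g. during a rebuild), assuming independent
    errors at rate `ber` per bit."""
    bits = array_bytes * 8
    return 1 - exp(-ber * bits)

for tb in (0.5, 2, 10):
    print(f"{tb} TB: {p_unrecoverable_read(tb * 1e12):.1%}")
```

Under these assumptions, reading 10 TB of cheap disk gives roughly a 55% chance of hitting at least one bad bit, while a small array keeps that risk in the low single digits.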
Given enough capacity and enough running time, failure becomes a certainty.
Keep copies. Use different media. Encrypt and upload to the cloud.
Did I mention power supplies?