Tuesday, 9 April 2019

IMPORTANT: CRITICAL Firmware Update for Intel SSD DC S4510 and S4610

The Intel SSD DC S4510 and S4610 series SATA SSDs have a critical firmware flaw: after 1,700 hours of cumulative idle power-on time, meaning powered on but not doing work, the drives brick.

The following is from the Release Notes for the update:

Intel® Solid State Drive DC S4510 and S4610 Series Revision History

Date          Firmware
March 2019    XC311102 (MR1)
              XCV10110 (MR1)

The following changes are included in this firmware update:

  • Resolved issue related to intermittent drive drop during initial boot.
  • Resolved issue where 1.92TB and 3.84TB SKUs may become unresponsive at 1,700 hours of cumulative Idle Power On Hours.

Intel direct download can be found here: Intel® SSD Data Center Tool (Intel® SSD DCT)

For OEM-provided applications, check the vendor's support site to find out if there is an update available.

This is a _critical data loss scenario_ and should be dealt with ASAP!
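
To triage a fleet before updating, PowerShell can report each drive's firmware revision and power-on hours. A minimal sketch, assuming Windows Server 2016/Windows 10 or later, where Get-PhysicalDisk exposes the FirmwareVersion property and can be piped to Get-StorageReliabilityCounter:

# List firmware revision and power-on hours for every physical disk so
# S4510/S4610 units still on pre-MR1 firmware stand out.
Get-PhysicalDisk |
    Select-Object FriendlyName, Model, FirmwareVersion,
        @{N='PowerOnHours'; E={ ($_ | Get-StorageReliabilityCounter).PowerOnHours }} |
    Format-Table -AutoSize

Note that PowerOnHours counts all powered-on time, not just idle time, so treat it as an upper bound when comparing against the 1,700-hour window.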

Philip Elder
Microsoft High Availability MVP
MPECS Inc.
Co-Author: SBS 2008 Blueprint Book
www.s2d.rocks !
Our Web Site
Our Cloud Service

Tuesday, 15 January 2019

Custom Intel X299 Workstation: Intel VROC RAID 1 NVMe WinSat Disk Score

We just finished a custom build for a client of ours in the US.

The machine is extremely fast but quiet.


After kicking the tires a bit with Windows 10 Pro 64-bit and some software installs post burn-in, we get the following performance out of the Intel VROC NVMe RAID 1 pair:

C:\Temp>winsat disk
Windows System Assessment Tool
> Running: Feature Enumeration ''
> Run Time 00:00:00.00
> Running: Storage Assessment '-ran -read -n 0'
> Run Time 00:00:00.77
> Running: Storage Assessment '-seq -read -n 0'
> Run Time 00:00:02.38
> Running: Storage Assessment '-seq -write -drive C:'
> Run Time 00:00:01.64
> Running: Storage Assessment '-flush -drive C: -seq'
> Run Time 00:00:00.45
> Running: Storage Assessment '-flush -drive C: -ran'
> Run Time 00:00:00.38
> Dshow Video Encode Time                      0.00000 s
> Dshow Video Decode Time                      0.00000 s
> Media Foundation Decode Time                 0.00000 s
> Disk  Random 16.0 Read                       1020.73 MB/s          8.8
> Disk  Sequential 64.0 Read                   3203.52 MB/s          9.3
> Disk  Sequential 64.0 Write                  1456.24 MB/s          8.8
> Average Read Time with Sequential Writes     0.090 ms          8.8
> Latency: 95th Percentile                     0.146 ms          8.9
> Latency: Maximum                             0.316 ms          8.9
> Average Read Time with Random Writes         0.058 ms          8.9
> Total Run Time 00:00:05.91

The machine is destined for a surveying company that's getting into high end image and video work with drones.

All in all, we are very happy with the build and we're sure they will be too!

Philip Elder
Microsoft High Availability MVP
MPECS Inc.
Co-Author: SBS 2008 Blueprint Book
www.s2d.rocks !
Our Web Site
Our Cloud Service

Friday, 11 January 2019

Some Thoughts on the S2D Cache and the Upcoming Intel Optane DC Persistent Memory

Intel has a very thorough article explaining what happens when the workload data volume on a Storage Spaces Direct (S2D) Hyper-Converged Infrastructure (HCI) cluster starts to "spill over" from an NVMe/SSD cache to the HDD capacity tier.

Essentially, any workload data that gets shuffled over to the hard disk layer takes a severe performance hit.

In a setup with either NVMe PCIe Add-in Cards (AiCs) or U.2 2.5" drives for cache and SATA SSDs for capacity, the hit would not be as drastic, but it would still be felt depending on workload IOPS demands.
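
On a running S2D node, spill-over shows up as cache misses. A minimal sketch for watching this live, assuming the "Cluster Storage Hybrid Disks" performance counter set that S2D exposes is present on the node:

# Sustained Cache Miss Reads/sec alongside low Cache Hit Reads/sec
# suggests the working set is spilling past the cache tier.
Get-Counter -Counter @(
    '\Cluster Storage Hybrid Disks(*)\Cache Hit Reads/sec'
    '\Cluster Storage Hybrid Disks(*)\Cache Miss Reads/sec'
) -SampleInterval 5 -MaxSamples 12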

So, what do we do to make sure we don't shortchange ourselves on the cache?

We baseline our intended workloads using Performance Monitor (PerfMon).

Here is a previous post that has an outline of what we do along with links to quite a few other posts we've done on the topic: Hyper-V Virtualization 101: Hardware and Performance
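
For a quick scripted version of that baseline, Windows PowerShell's Get-Counter can capture the core PhysicalDisk counters to a .blg file for later review in PerfMon. A minimal sketch; the interval, duration, and output path are illustrative only:

# Sample core disk counters every 15 seconds for one hour.
$counters = @(
    '\PhysicalDisk(_Total)\Disk Transfers/sec'    # total IOPS
    '\PhysicalDisk(_Total)\Disk Bytes/sec'        # throughput
    '\PhysicalDisk(_Total)\Avg. Disk sec/Read'    # read latency
    '\PhysicalDisk(_Total)\Avg. Disk sec/Write'   # write latency
)
Get-Counter -Counter $counters -SampleInterval 15 -MaxSamples 240 |
    Export-Counter -Path C:\Temp\DiskBaseline.blg -FileFormat BLG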

We always try to size the cache not just for the workloads of today but also for the workloads of tomorrow, across the solution's lifetime.

S2D Cache Tip

TIP: When setting up an S2D cluster, we suggest running with a higher count of smaller cache drives rather than just two larger ones.

Why?

For one, we get a lot more bandwidth/performance out of three or four cache devices than out of two.

Secondly, in a 24-drive 2U chassis, if we start off with four cache devices and lose one, we still maintain a decent cache-to-capacity ratio (1:6 with four versus 1:8 with three).

Here are some starting points based on a 2U S2D node setup we would look at putting into production (a quick way to verify which drives get claimed for cache follows the examples).

  • Example 1 - NVMe Cache and HDD Capacity
    • 4x 400GB NVMe PCIe AiC
    • 12x xTB HDD (some 2U platforms can do 16 3.5" drives)
  • Example 2 - SATA SSD Cache and Capacity
    • 4x 960GB Read/Write Endurance SATA SSD (Intel SSD D3-S4610 as of this writing)
    • 20x 960GB Light Endurance SATA SSD (Intel SSD D3-S4510 as of this writing)
  • Example 3 - Intel Optane AiC Cache and SATA SSD Capacity
    • 4x 375GB Intel Optane P4800X AiC
    • 24x 960GB Light Endurance SATA SSD (Intel SSD D3-S4510 as of this writing)
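
Once a cluster is built, the cache claim mentioned above is easy to confirm. A minimal sketch, assuming Enable-ClusterStorageSpacesDirect has already run; cache devices report Usage = Journal, capacity devices Auto-Select:

# Show which physical disks S2D claimed for cache versus capacity.
Get-PhysicalDisk |
    Sort-Object Usage |
    Format-Table FriendlyName, MediaType, BusType, Usage, Size -AutoSize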

One thing to keep in mind with a 2U server that has 12 front-facing 3.5" drives plus four or more internally mounted 3.5" drives is heat and available PCIe slots. The additional drives can also constrain which processors can be installed, again due to thermal restrictions.

Intel Optane DC Persistent Memory

We are gearing up for a lab refresh when Intel releases the "R" code Intel Server Systems R2xxxWF series platforms, hopefully sometime this year.

That's the platform Microsoft used to set an IOPS record with S2D and Intel Optane DC persistent memory.

We have yet to see any type of compatibility matrix for how/what/where Optane DC can be set up, but one should be published soon!

It should be noted that they will probably be frightfully expensive, with the value found in online transaction processing (OLTP) setups where every microsecond counts.

TIP: Excellent NVMe PCIe AiC for lab setups that are Power Loss Protected: Intel SSD 750 Series


Intel SSD 750 Series Power Loss Protection: YES

These SSDs can be found on most auction sites, some new and most used. Always ask for an Intel SSD Toolbox snip of the drive's wear indicators to make sure there is enough life left in the unit for the thrashing it would get in an S2D lab! :D
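
For drives already in hand, similar wear data can be pulled without the Toolbox. A minimal sketch using the built-in storage cmdlets; the Wear value is only populated on drives that report it:

# Report wear, power-on hours, and temperature per physical disk.
Get-PhysicalDisk |
    Get-StorageReliabilityCounter |
    Select-Object DeviceId, Wear, PowerOnHours, Temperature |
    Format-Table -AutoSize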

Acronym Refresher

Yeah, gotta love 'em! Being dyslexic has its challenges with them too. ;)

  • IOPS: Input/Output Operations per Second
  • AiC: Add-in Card
  • PCIe: Peripheral Component Interconnect Express
  • NVMe: Non-Volatile Memory Express
  • SSD: Solid-State Drive
  • HDD: Hard Disk Drive
  • SATA: Serial ATA
  • Intel DC: Data Centre (US: Center)

Thanks for reading!

Philip Elder
Microsoft High Availability MVP
MPECS Inc.
Co-Author: SBS 2008 Blueprint Book
www.s2d.rocks !
Our Web Site
Our Cloud Service

Friday, 7 June 2013

Windows Server 2012 to RST RAID 0 Error: Windows can’t be installed on drive 0 partition 1 - and Others

We had the following errors when trying to install Windows Server 2012 onto a desktop setup:


Windows Setup

Windows cannot be installed to this disk. This computer’s hardware may not support booting to this disk. Ensure that the disk’s controller is enabled in the computer’s BIOS menu.

And then, after fiddling about with DiskPart, we got:


Windows Setup

We couldn’t install Windows in the location you chose. Please check your media drive. Here’s more info about what happened: 0x80300001.

Windows cannot be installed to this disk. This computer’s hardware may not support booting to this disk. Ensure that the disk’s controller is enabled in the computer’s BIOS menu.

The PC setup

  • Intel DX79SR with Core i7 and 64GB RAM
  • Intel RST RAID enabled and 6x 160GB Intel SSDs in RAID 0
  • Primary SATA set to RAID
  • Secondary SATA DISABLED in BIOS
  • Boot order set correctly

Now, it is important to note that these SSDs have been in and out of various systems.

So, a last-ditch effort:

  1. Log into the RAID BIOS
  2. Reset the disks to NON-RAID
  3. Boot to WinPE
  4. Shift+F10
  5. DiskPart
  6. Select each SSD and CLEAN (see the script sketch after this list)
  7. Reboot
  8. Log into RAID BIOS
  9. Set up RAID 0
  10. Boot to Windows Server 2012 Setup
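
For repeatability, steps 5 and 6 can be scripted. A minimal sketch; the disk numbers are examples only and must be confirmed with LIST DISK first, since CLEAN irrecoverably wipes each disk's partition table:

rem clean-ssds.txt -- run from WinPE with: diskpart /s clean-ssds.txt
rem Disk numbers are examples; confirm with LIST DISK before running.
select disk 1
clean
select disk 2
clean
rem ...repeat select/clean for each remaining former array member...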

We created our 120GB partition and did not see a single error message.

So, rule of thumb: clean each disk with DiskPart while in JBOD/standalone mode before configuring the disks in a new host-based (chipset) RAID setup.

A hardware RAID setup would initialize the disks (use the deep/long initialization to write a full pass of zeros across all platters/SSDs if problems happen), so we would not normally see this issue there.

Philip Elder
MPECS Inc.
Microsoft Small Business Specialists
Co-Author: SBS 2008 Blueprint Book

Chef de partie in the SMBKitchen
Find out more at
www.thirdtier.net/enterprise-solutions-for-small-business/
