Programming4us

SAP Hardware, OS, and Database Sizing

3/29/2012 4:00:17 PM
Common Server Considerations for mySAP Sizing

Server sizing is focused on determining not only how many servers might be required to host a specific number of end users or batch processes, but also on how each server is configured in terms of processors, RAM, and local disks. Server sizing is also impacted by your unique high-availability and disaster recovery requirements; certain HA/DR offerings are only supported by specific hardware, operating system, and database combinations.

Different form factors and types of servers might also best serve your particular needs. For example, many of my customers purchase only rack-mountable servers, whereas others might prefer tower models. In the same way, some prefer dense form factors (slim-line servers that only consume a few inches of vertical space), but others prefer larger server footprints with correspondingly greater capabilities (like room for more disk drives, processors, PCI and other I/O slots, and so on). If you let your potential hardware partners know your preferences up front, you’ll save a lot of time when it finally comes down to identifying part numbers and putting together pricing information.

Another obvious server sizing consideration involves the operating systems supported by the platform. If your OS standard is Linux, for example, you need to ensure that your hardware partners know this, so that they size your mySAP solution with Linux-compatible server platforms.

Finally, scalability of the platform needs to be taken into account. If your SAP system’s database server requires four processors to address your estimated peak workload, you probably do not want to lock yourself into a four-processor platform; a larger eight-CPU-capable system configured for only four CPUs provides you with in-the-box vertical scalability if you later determine you need more processing power. The same philosophy should be applied to RAM, I/O slots, and so on, unless you are either quite certain of your needs or willing to risk being wrong. Beyond scalability of your database platform, you also need to share your sizing philosophy with regard to application servers. Do you prefer in-the-box scalability, or is your application server strategy centered around adding additional fully-configured servers? Like determining your OS and platform preferences, your scalability requirements should have already been fleshed out by this time, and shared with each hardware vendor.

Disk RAID Configurations and Features

The following list identifies different RAID implementations and the configuration options, advantages, and features of each as it relates to sizing:

  • RAID 1, or mirroring. For a desired amount of disk space, you must purchase and install twice as much disk space; 400GB of usable space requires 800GB in raw disk space. Thus RAID 1 configurations can get expensive not only in regard to the number of physical disk drives required, but also in terms of the number of drive shelves and even storage cabinets needed to house the drives. In some cases, the number of array controllers and requisite cabling must be increased as well.

  • RAID 1+0, or 0+1, or 10, or striping/mirroring. Like RAID 1 mirroring, half of the configured drives are consumed to protect the data on the other half. Thus, the same cost structure applies here. The benefit is maximum performance, however; both writes and reads are performed faster in RAID 1+0 configurations than in other implementations.

  • RAID 5, or striping data and parity. This is usually the least expensive method of achieving high availability within a disk subsystem. Depending upon the implementation, a RAID 5 configuration loses only a fraction of otherwise usable space to parity. A few caveats are very important, however. First of all, RAID 5 implementations will never realize the raw performance achievable by other RAID solutions. Second, to obtain even moderate performance, a minimum number of drives (often referred to as spindles) must be configured. Last of all, a RAID 5 configuration might be limited in terms of the number of drives that can be configured versus the high availability obtained; for a system with six drive shelves, a RAID 5 set is at risk (as is any RAID set) if two of its drives are housed in the same shelf, because a single shelf failure then takes out both. Thus, the value derived from losing only a fraction of disk space to parity might be diluted a bit, depending upon the disk subsystem vendor’s gear and how it implements no-single-point-of-failure disk solutions.

  • Other RAID levels, like RAID 3/5 and RAID 4. These are usually best compared to RAID 5, where a certain quantity of raw disk space is lost to maintain parity data. Generally, the amount of required parity data is greater in these implementations than in RAID 5, however (or overall performance is worse), which is one reason why they tend to be less common today.
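The space overheads described above are easy to compare with a quick calculation. The sketch below is illustrative only; it assumes equal-sized drives and the simple textbook formulas (mirrored levels keep half the raw space, RAID 5 gives up one drive's worth of capacity to parity).

```python
def usable_gb(raid_level: str, drive_count: int, drive_gb: float) -> float:
    """Rough usable capacity for common RAID levels (equal-sized drives)."""
    raw = drive_count * drive_gb
    if raid_level in ("1", "1+0", "0+1", "10"):
        return raw / 2                        # half the drives mirror the other half
    if raid_level == "5":
        if drive_count < 3:
            raise ValueError("RAID 5 needs at least 3 drives")
        return (drive_count - 1) * drive_gb   # one drive's worth lost to parity
    raise ValueError(f"unsupported RAID level: {raid_level}")

# The RAID 1 example from the text: 800GB raw yields 400GB usable.
print(usable_gb("1", 2, 400))    # 400.0
# Six 36GB drives in RAID 5 lose only one drive's worth to parity.
print(usable_gb("5", 6, 36))     # 180
```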

With the basics behind us, let’s turn our attention to some of the disk subsystems and drives available on the market today, and how they can impact our SAP sizing.

Commonly Deployed Disk Subsystems for SAP

Various disk subsystems are available on the market today, from popular SCSI-based solutions, to those leveraging fibre-channel and other connectivity fabrics. From a sizing perspective, I have found that the most important consideration is scalability, especially in terms of in-place upgrades. Disk subsystems are expensive, and replacing your system with a new one is even more expensive. Money spent up front to address not only scalability but also the ability to upgrade drives, controllers, and so on can make incremental upgrades easier, less expensive, and helpful in extending the life of your solution. With this in mind, take care to buy a disk subsystem that is not at the end of its useful life cycle. And be sure to invest in a platform that has a historical and well-documented upgrade path—a system that allows you to replace all of your disk drives with larger and faster drives is a good example. Similarly, a disk subsystem that supports upgrades to its array controllers and cache makes financial sense, too.

Like disk subsystems, a variety of disk drives are available today, ranging in capacity, form factor, and speed. Consider the following:

  • Drives that operate at 10,000RPM can boost I/O performance by 5–30% over their 7,200RPM counterparts, depending on their application in a computing environment. Typically, write-intensive volumes such as database transaction logs, temporary/sort areas, and very active, or hot, tables and indexes benefit the most.

  • Further, 15,000RPM hard disk drives access data 26% faster than their 10,000RPM counterparts. These 15K drives deliver a theoretical 50% improvement in Input/Output Operations Per Second (IOPS) over 10K drives, and 108% over 7200RPM drives (though in reality the realized performance improvement is much smaller). Servers with multiple drives handling large numbers of transactions benefit most from this increased I/O performance.

  • The largest drives on the market today can actually hold an entire SAP database, though actually doing so is never recommended. Why? Because individual disk drives can still move only five to seven sustained MB per second factoring in all latencies. Thus, as always, it’s necessary to configure more drives rather than fewer.

  • To save money, I typically recommend implementing something smaller than the largest drives available, though in today’s economy, the cost difference between different sizes continues to decrease. A good example today is the use of 18GB drives in database design; with 36 and in some cases 72GB drives costing nearly the same, it simply makes no sense to invest in smaller drives. A year from now, the same argument will be made for 36GB drives, too, as 72 and 145GB drives continue to become more mainstream and subsequently fall in price. If you focus on spindle count rather than pure capacity, you’ll be in good shape when it comes to performance.

  • Disk drive form factors impact sizing in that the largest and newest drives tend to be physically larger than their more mature counterparts. So, to take advantage of the largest drives may require sacrificing in terms of the number of drives that can be physically installed in a particular disk subsystem. My general recommendation is to ensure that your disk subsystem can handle different drive sizes (for future upgrades or other needs), and then size for the number of spindles needed (to meet your performance goals) rather than installing the largest drives. And if a particular drive size (like 36GB) happens to be available in two form factors, push for the smaller form factor to make more empty drive bays available later when you need more disk space.
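The spindles-over-capacity advice above can be reduced to a back-of-the-envelope calculation: size for both capacity and throughput, and let the larger number win. The sketch below is a rough illustration; the default per-spindle throughput reflects the 5–7MB per second figure cited earlier, and the workload numbers in the example are hypothetical.

```python
import math

def spindles_needed(db_size_gb: float, peak_mb_per_sec: float,
                    drive_gb: float, per_spindle_mb_per_sec: float = 6.0) -> int:
    """Drives required to satisfy both capacity and sustained throughput."""
    for_capacity = math.ceil(db_size_gb / drive_gb)
    for_throughput = math.ceil(peak_mb_per_sec / per_spindle_mb_per_sec)
    return max(for_capacity, for_throughput)

# A hypothetical 400GB database with a 120MB/s peak load on 72GB drives:
# capacity alone needs only 6 drives, but throughput demands 20.
print(spindles_needed(400, 120, 72))  # 20
```

This is why the text recommends smaller drives in quantity: the throughput term, not the capacity term, usually dominates for a busy SAP database.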

One other general area is important, too: using disk-based storage to house online copies of your database. One client of mine has effectively quadrupled the amount of disk space required to house their database volumes simply because they also maintain a number of “aged” offline copies on disk. Another client eats up disk space in a similar manner, by maintaining a series of client copies. And many more refresh their supporting landscape systems with copies of the production database on a regular basis, consuming disk space throughout the system landscape. Don’t forget to factor in these disk copies if they are part of your DR, client, or other strategy.

Storage Virtualization—The Latest Paradigm in Enterprise Computing for SAP

The use of disk subsystems that support Storage Virtualization, or SV, is growing. SV allows a greater number of spindles to be configured than competing technologies, and in doing so creates a more flexible foundation to address both growth and changing subsystem needs. Storage virtualization improves storage utilization by minimizing wasted space associated with configuring lots of RAID 1 volumes, and by reducing the disk space required to maintain parity data in RAID 5 implementations. It also supports dynamic storage allocation and reallocation, which again minimizes wasted space by letting you “move” or allocate storage from static drive volumes to those volumes that are growing. Beyond typical storage subsystems, design considerations for a virtual storage array include the following:

  • There is a need to design and size a new SAN abstraction layer, referred to as a group by a number of virtual storage vendors in the market today. This term refers to the collection of physical disk drives that are pooled together at a hardware level, on top of which storage LUNs are created. It is the LUNs and not the groups that are assigned RAID levels, like 1+0 or 5. The groups merely provide a way of dedicating a specific number of underlying disk spindles to a database or other set of files, or of segregating different types of I/O from one another. Especially in the latter case, it’s common to segregate OLTP transaction loads (short discrete transactions typical of R/3, CRM, and SRM products) from OLAP or reporting data loads (like the transactions and queries inherent to BW and SEM).

  • The argument for RAID 1+0 versus RAID 5 is beginning to fall apart, as the number of drives available to a virtual RAID 5 LUN has pushed throughput and I/O performance to levels nearly equaling those achieved by RAID 1+0 configurations. This is dependent upon a number of factors, however, including the hardware vendor’s implementation of both SV and RAID 5.

  • Given the striping of data that automatically occurs across a group of drives, creating simple RAID 1 mirror pairs is no longer necessary. The minimum number of disk drives required to create a virtual array is six or eight, so the LUN created on top of these drives benefits from both the performance and availability inherent to multiple spindles. The real challenge then becomes laying out disk partitions such that each partition meets the performance requirements its role demands without sacrificing a lot of disk capacity. For example, no one wants to place a 20GB SQL Transaction Log file on a RAID 0+1 (sometimes called vRAID 1) virtual array stripe of 36GB drives—the 108GB of usable space would be virtually wasted. So to better use the space, multiple OS partitions might be placed on it, like SQL Server’s TempDB, SQL Server executables, and so on.

  • Similarly, because LUNs are no longer locked in by physical disk size boundaries, more efficient use of expensive disk resources can be realized. For example, in a traditional SAN configured today, it would not be uncommon to consume a pair of 18 or 36GB drives to house something as small as an SAP executables volume, or perhaps a MSCS quorum disk; most of this space would simply sit idly by, never to be used. And it could not be effectively shared with other OS partitions because of the 5–7MB throughput limitations per spindle discussed previously. In a virtual storage solution, though, storage can be easily customized to provide only what is needed—the spindle counts underneath the LUNs allow this flexibility. The resulting savings in disk space can therefore be absolutely huge for a typical mySAP implementation of three or four components, each with four-system landscapes.

  • Virtual storage arrays introduce new high-speed SAN fabrics (2 Gigabit and faster), which need to be taken into consideration when adding a virtual storage array to an existing SAN; they interoperate, but at the slower traditional SAN speeds.

  • Spare capacity (the SV version of “hot spare drives”) becomes more critical as data is spread across more and more drives. Why? Because with a large group of drives, it becomes much more likely that some drive within one of your LUNs will fail in the next two or three years. MTBF is quoted as a single absolute value, of course, but it is really a mean; roughly half the drives in a population will fail before reaching the rated figure. More to the point, the expected time to the first failure in a group shrinks with the group’s size. For instance, if the MTBF of a particular drive is one million hours, the expected time to the first failure among a group of 50 such drives drops to roughly one million divided by 50, or only 20,000 hours. Thus, the likelihood of suffering from drive failures sooner than later is that much greater, and the need for spare drives becomes that much more imperative.
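The effect is easy to quantify under the usual exponential-lifetime assumption. This sketch is illustrative, not a reliability model of any particular drive, but it shows why a large drive group makes a failure within a few years quite likely.

```python
import math

def p_any_failure(mtbf_hours: float, drive_count: int, period_hours: float) -> float:
    """Probability that at least one of N independent drives fails within
    the period, assuming exponential (memoryless) drive lifetimes."""
    p_one_survives = math.exp(-period_hours / mtbf_hours)
    return 1 - p_one_survives ** drive_count

# 50 drives, each rated at 1,000,000 hours MTBF, over 3 years (~26,280 hours):
print(round(p_any_failure(1_000_000, 50, 3 * 8760), 2))  # 0.73
```

Even with a seemingly enormous per-drive MTBF, a 50-drive group has roughly a three-in-four chance of losing a drive within three years, which is why spare capacity matters.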

Keeping the previously noted design considerations in mind, sizing a virtual array is quite similar to sizing any other disk subsystem for SAP, with the following exceptions:

  • The highest levels of disk performance are achievable, though at a price. Thus, a disk-subsystem-focused delta TCO analysis may be required in cases where the need for maximum disk throughput is not the primary sizing consideration.

  • The greatest scalability is achievable too, again with the caveat that the cost per GB may be higher than in less scalable systems. Interestingly, though, as a virtual array grows, its storage utilization actually improves, paying off big dividends in terms of lower and lower costs per GB.

  • Virtual storage arrays are still relatively immature and untested in the area of SAP production environments; I have only seen four implemented to date. So for the most risk-averse SAP implementations, this lack of maturity constitutes a sizing factor.

Other factors can come into play, too. The fact that virtual array disk controllers are two to six times faster than their predecessors, and that fibre-channel disks are faster than their SCSI counterparts, is an obvious one. The cost of storage administration, serviceability, multivendor hardware and operating system support, and other positive traits will push the adoption of storage virtualization across the enterprise, too. Similarly, difficulty in accessing virtual storage specialists and the paradigm change in managing data will hinder this adoption. But I believe the most compelling reason we will see virtual storage take off in mySAP environments is simple—awesome performance. Review the “Virtual Arrays vs the Competition” PowerPoint on the Planning CD for compelling data illustrating my own findings and observations.

Operating System Factors

The capabilities of one operating system over another often come into play when sizing an SAP solution. At the most basic level, sizing a solution based on a preferred OS is quite common—I’ve often been told by my customers and prospects that a particular OS is their standard, and therefore needs to be taken into consideration. Beyond the general OS choice, though, lie differences among versions of the same basic OS, be it Windows, Linux, S/390, or any number of Unix flavors. The following factors can also influence an SAP sizing:

  • Support for new mySAP products is first provided on Windows-based (and to a lesser extent, Linux-based) platforms; other OS platform support is provided as the product matures.

  • Memory, processing power, or other hardware-related factors may dictate a particular operating system. A classic example is the fact that you have to step up to Windows 2000 Advanced Server to really take advantage of more than 2GB of RAM, regardless of how much physical RAM might actually be housed in a server.

  • HA and DR considerations will drive which OS you select for your SAP system. Even if you are clearly in one OS vendor’s camp, your need for a certain level of high availability may force you to purchase a different version of an OS than you otherwise would. Case in point—Microsoft’s Windows 2000 platform again, where server clustering is only supported on the Advanced Server and Data Center Server editions of the OS.

  • Obtaining OS expertise may be a factor—as with any solution component within the stack, access to experienced and reasonably priced technical support staff can sway your decision to go with a particular operating system. For example, although Windows Server 2003 (WS2003) was recently released, it will be some time before we see it underpinning productive SAP installations; it simply takes time for a new product to mature, reflecting both people and product development learning curves.

Database Selection and Sizing Factors for SAP

Selecting a particular SAP-supported database often comes down to two things—taking advantage of in-house experience with a particular RDBMS, and seeking to reduce the cost of acquiring or renewing database licenses for your SAP implementation. One of the biggest total cost of ownership factors for an SAP solution revolves around the database component. Other sizing-related factors include

  • The OS and database (DB) are often tied together; SQL Server only runs on a few select Microsoft OS platforms, for example.

  • High Availability and Disaster Recovery options are often tied to a particular database.

  • Some database releases are inherently faster or more capable of performing a particular type of operation than other database releases.

  • Database acquisition pricing varies widely, from free (SAPDB, for example) to quite expensive, and everything in between.

  • The widespread adoption of certain database technologies with regard to mySAP implementations has varied as well, though I continue to see Oracle’s and Microsoft’s (and to a lesser extent, IBM’s) database products dominate this market.

Regardless of the RDBMS deployed, understanding how quickly your database will grow is the biggest factor of all. Typically, I try to understand my client’s three-year plan in this regard and size the disk subsystem with room to spare.
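A simple compound-growth projection can anchor that three-year plan. The sketch below is a hypothetical illustration; the starting size, growth rate, and headroom figures are assumptions, not SAP guidance.

```python
def disk_after_growth(current_gb: float, annual_growth_rate: float,
                      years: int, headroom: float = 0.2) -> float:
    """Projected disk requirement after compound annual database growth,
    plus a headroom factor for 'room to spare'."""
    projected = current_gb * (1 + annual_growth_rate) ** years
    return projected * (1 + headroom)

# A hypothetical 200GB database growing 40% per year, sized for 3 years
# with 20% headroom:
print(round(disk_after_growth(200, 0.40, 3)))  # 659
```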

Another database consideration involves whether you plan to employ SAP’s MCOD feature. MCOD (Multiple Components, One Database) allows you to install several mySAP components on one physical database. For example, you can combine a couple of R/3 4.6C instances, mySAP CRM, and SAP Workplace all in one database. One sizing key is to combine only components that are similar in nature—though other combinations are supported in some cases, OLAP components should only coexist with other OLAP components (like SEM with BW), and OLTP components should only be paired with other OLTP components. This is because of the nature of these disparate systems; good performance would be more difficult to achieve otherwise.

The key sizing factor is the database server, though. A conservative approach to sizing includes sizing the individual components separately in terms of disk space, CPU processing power, and RAM required. To calculate total disk space required, simply add up the requirements of each component, and subtract 10% (which according to SAP is the typical disk space savings realized by MCOD deployments). To calculate CPU requirements, be sure that you capture the number of SAPS that characterizes the load of each system, and then add these together. Do the same for memory needs, and present your combined SAPS, disk, and RAM requirements to your hardware vendor so that an appropriate server and disk subsystem platform can be architected.
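The additive MCOD arithmetic described above can be sketched as follows. The component figures are hypothetical; the 10% disk savings is the SAP rule of thumb cited in the text.

```python
def mcod_requirements(components: list) -> dict:
    """Combine per-component sizing for an MCOD database server.
    SAPS and RAM simply add up; disk is the sum less ~10% (per SAP)."""
    total_saps = sum(c["saps"] for c in components)
    total_ram_gb = sum(c["ram_gb"] for c in components)
    total_disk_gb = sum(c["disk_gb"] for c in components) * 0.90
    return {"saps": total_saps, "ram_gb": total_ram_gb, "disk_gb": total_disk_gb}

# Hypothetical R/3 plus CRM workloads sharing one database:
combined = mcod_requirements([
    {"saps": 4000, "ram_gb": 8, "disk_gb": 300},   # R/3
    {"saps": 1500, "ram_gb": 4, "disk_gb": 120},   # CRM
])
print(combined)  # roughly 378GB of disk after the 10% savings
```

The combined SAPS, RAM, and disk figures are what you would hand to your hardware vendor.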

Finally, SAP AG requires that you combine only production systems with production systems, test systems with other test systems, and so on. Do not mix landscapes, as it is not supported by SAP AG.
