Review: Apple’s new 5K iMac — powerful, pixel-ful and pricey

There are 14.7 million reasons to want Apple’s latest iMac — the 14.7 million pixels that make up its stunning 27-in. 5K Retina display. At $2,499, the new iMac isn’t cheap, but its screen makes this desktop a great value — if you can afford it.

Though the tech behind iMac displays has changed over time, the one constant on the 27-in. iMac has been the resolution: 2560 x 1440. Even as pixel densities have increased on other Apple devices — starting with the iPhone 4 and then the iPad and MacBook Pro — a Retina-class display on a desktop Mac was elusive. No longer — on this new iMac, the LED display features a mind-boggling resolution of 5120 x 2880 pixels. And despite the display’s capabilities, Apple claims it uses 30% less power than previous models.
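For anyone who wants to check the math behind that 14.7 million figure, here is a quick back-of-the-envelope calculation (mine, not Apple's) showing how the 5K panel compares with the previous 27-in. iMac and with a standard 1080p display:

```python
# Quick sanity check on the pixel counts quoted in the review.
retina_5k = 5120 * 2880        # new Retina iMac panel
old_imac = 2560 * 1440         # previous 27-in. iMac panel
full_hd = 1920 * 1080          # a standard 1080p display

print(f"5K iMac pixels:   {retina_5k:,}")                     # 14,745,600 -> "14.7 million"
print(f"vs. old 27-in.:   {retina_5k / old_imac:.0f}x more")  # exactly 4x
print(f"vs. 1080p:        {retina_5k / full_hd:.1f}x more")   # roughly 7x
```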

In order to make this high-end screen work, Apple had to introduce some innovative technologies. For example, to create a Retina display this size, the Oxide TFT technology — which gives each individual pixel its charge — had to be more precise, offer a quicker charge per pixel and deliver consistent brightness across the entire 14.7 million-pixel array. In addition, Apple said it had to create its own timing controller — called a TCON — to handle the high number of pixels.

Tech details aside, this display has to be seen to be appreciated. Fonts render sharp and crisp, regardless of size, while high-quality images and videos pop. But the 5K resolution has one unfortunate side effect: Low-resolution media looks pixelated when expanded to normal viewing sizes.

A refined design and impressive specs

I’ve always been a fan of the iMac lineup, from the ease of setup to the designs. I thought the original bubble design was cute, loved the swiveling Luxo-lamp iMac G4 and thought the G5’s plastic white look was adequate. Apple shifted to aluminum and glass in 2007, and has since refined the overall style, reducing the unit’s weight and thickness.

The latest iMac is still housed in an aluminum frame. The glass display is bordered by a black frame that hides the iSight camera. Below the black border is a wide band of aluminum that bears, front and center, the Apple logo. If you’re a fan of well-built hardware like I am, what you will notice is a seemingly unbroken frame — joined by a process called friction welding — and the tight tolerances between the glass and aluminum materials. The iMac is a thing of beauty.

And it performs well, too. The $2,499 Retina iMac runs on Intel’s Haswell platform, powered by a quad-core 3.5GHz Intel Core i5 (with Turbo Boost capabilities that can push processing speeds up to 3.9GHz as needed), 8GB of 1600MHz DDR3 memory, a 1TB Fusion Drive for storage, and graphics powered by the AMD Radeon R9 M290X with 2GB of GDDR5 memory.

You can upgrade the processor to a quad-core 4GHz Intel Core i7 (with a Turbo Boost of 4.4GHz) for an additional $250; memory can be upgraded to 16GB for an additional $200 or to 32GB for $600 more. This iMac has an access door on the back that opens onto four user-accessible SO-DIMM slots. If you can swing it, I recommend the Core i7 upgrade, since you can always add memory down the line.

During the time I spent with the iMac, performance was consistent, and the onscreen graphics hardly ever dropped frames when playing videos or working through operating system animations. But if you think you’re going to want more graphics performance, you should opt for the AMD Radeon R9 M295X with 4GB of GDDR5 memory; that upgrade will cost you $250 more.

Storage choices

The iMac comes standard with what Apple marketing calls a Fusion Drive — a mechanical hard drive combined with 128GB of flash storage. Managed by background software, the Fusion Drive automatically figures out what files you use most often and puts them on the faster flash drive for quicker access. (The operating system and frequently used applications are also stored on the flash drive.) Larger data files — or files that are infrequently accessed — reside on the slower mechanical drive. The difference is pretty noticeable if you’re constantly working on large files.
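Apple doesn’t publish how the Fusion Drive’s tiering logic actually works, but the general idea of promoting frequently accessed files to flash can be illustrated with a toy sketch. Everything here, from the capacity constant to the file names, is hypothetical and only meant to show the access-frequency idea:

```python
# Toy illustration of access-frequency tiering, loosely in the spirit of the
# Fusion Drive description above. Apple's real algorithm is not public; the
# names, capacity and thresholds here are invented for illustration.
from collections import Counter

FLASH_CAPACITY_FILES = 2          # pretend the flash tier has room for 2 "hot" files

access_counts = Counter()
flash_tier, hdd_tier = set(), set()

def record_access(path: str) -> None:
    """Count an access, then keep the most frequently used files on the fast tier."""
    access_counts[path] += 1
    hot = {p for p, _ in access_counts.most_common(FLASH_CAPACITY_FILES)}
    flash_tier.clear()
    flash_tier.update(hot)
    hdd_tier.clear()
    hdd_tier.update(set(access_counts) - hot)

for path in ["app.bin", "photo.raw", "app.bin", "doc.txt", "app.bin", "movie.mkv"]:
    record_access(path)

print("flash:", sorted(flash_tier))   # most frequently accessed files
print("hdd:  ", sorted(hdd_tier))     # everything else stays on the spinning disk
```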

If you want more storage, you can configure the iMac with a 3TB Fusion Drive for an additional $150. You can also swap the 1TB Fusion Drive for 256GB of flash storage at no change in price, trading storage space for consistent speed. Alternatively, you can purchase a 512GB SSD for $300 more or a 1TB SSD for $800 more.

You can also add an external SuperDrive for an additional $79 if you intend to view or create CDs and DVDs, and you can swap out the Magic Mouse for a Magic Trackpad, which supports gesture and touch input. (I highly recommend the Magic Trackpad, by the way; if you’re already using an iPad or iPhone and are accustomed to gestures, the trackpad is the choice — even if you have been wronged by trackpads in the past. This Apple trackpad is different; trust me.)

In the rear, the iMac has a headphone port, SDXC card slot, four USB 3.0 ports, two Thunderbolt 2 ports, and a Gigabit Ethernet port. As for wireless, the iMac supports 802.11ac, and is a/b/g/n compatible; there’s also support for Bluetooth 4.0.

The speakers sound good, considering the sound comes from the bottom of the iMac. You won’t be throwing any high-powered raves, but the sound is more than sufficient for filling the immediate area.

Keep your cool

One thing I noticed was how quietly this iMac runs. Even with the processor working at full blast for extended periods, the fans were never louder than ambient room noise.

During heavy processing, however, the iMac’s internal fan vents hot air through the grille hidden behind the stand, so make sure it is never covered. The rest of the iMac stayed cool, even cold to the touch toward the edges, but warmed as I moved toward the center and the vent.

My only real caveat with this iMac — which in my opinion is probably the best all-around Mac that Apple has ever shipped — is that it’s self-contained. It’s likely that the 5K display will outlive the usefulness of the now-current Haswell-class Intel chipset and AMD GPU. But this iMac doesn’t support Target Display Mode — a feature built into previous iMacs that allows you to connect another computer to use the iMac’s display, as you would any other monitor. This is something potential buyers should be aware of.

Bottom line

This iMac is the best all-in-one computer for novices who want to be up and running right out of the box, while packing enough horsepower to warrant a look from high-end users — especially if your job requires staring at a display all day. The price is high, but, given the technology, you may find it an excellent investment.


Decisions, decisions: Choices abound as data center architecture options expand

When the American Red Cross talks about mission-critical systems, it’s referring to the blood supply that helps save lives. The non-profit organization manages 40% of the U.S.’s blood supply, so stability, reliability and tight security are of paramount concern, says DeWayne Bell, vice president of IT infrastructure and engineering.

With some 25,000 employees and volunteers at about 500 locations around the country, the Red Cross used to manage two data centers, but “we found out that more and more of the enterprise systems we deployed needed higher availability, 24/7 support, and redundancy and resiliency,” says Bell.

“We didn’t have the capability any more to manage the infrastructure,” Bell says, and yet the organization did not feel comfortable moving to a public cloud environment because of security concerns.

Seeking a “more efficient data center,” the Red Cross outsourced its data center operations to CSC in 2007 and moved to Unisys in 2012 because of lower costs, according to Bell. Because the organization didn’t have a large IT staff, he says it didn’t make sense to build an updated, consolidated data center that was highly secure and then try to manage it in-house. The five-year contract with Unisys is valued at over $80 million, and includes on-site services and help desk support, according to the vendor.

For apps that are not as critical to the business, such as email, the Red Cross is using cloud software including Microsoft Office 365. “If email goes down, our business functions won’t stop. But for things that could stop the business, we keep those systems managed and hosted at Unisys,” Bell explains.

The Red Cross is in good company. As organizations look to modernize their IT infrastructure, data center choices are expanding. Cloud, colocation, modular, outsourcing, virtualization and more efficient servers are all vying for your attention, and figuring out the right direction is fraught with challenges, to say the least. And even after you make a decision, you need to revisit it every now and again in light of shifting business needs and new technology choices, as Facebook did with its recent implementation of a more modular network in its Iowa data center.

More companies are opting to move away from traditional data centers with rows and racks of servers because there are a number of issues to contend with in a conventional data center model, including buying your own equipment, figuring out a floor plan, installing it, testing it and maintaining it, experts say. The number of data centers worldwide will peak at 8.6 million in 2017 and then begin to decline slowly, IDC predicts, although the amount of total data center space will continue to grow as mega-data centers replace smaller ones.

A majority of organizations will stop managing their own infrastructure and make use of on-premises, hosted managed services and shared cloud offerings in service providers’ data centers, says Richard Villars, vice president of data center and cloud research at IDC. As a result, there will be a consolidation and retirement of existing internal data centers, while service providers continue to build, remodel and acquire data centers to meet the growing demand for capacity, Villars says.

Additionally, in the last five or six years the physical footprint of servers has been shrinking, especially with the rise of virtualization, and companies no longer need as many servers as they traditionally managed, explains Steven Hill, senior analyst of data center solutions at Current Analysis. “Data centers designed around physically larger environments all of a sudden have all this space available.”

The data center architecture choices available today mean that companies can focus on providing a higher level of service to their end user community, refining processes and gaining efficiencies. But with choice comes the question of how to figure out what’s right for your company. And these days, the question of whether to buy energy-saving servers and other gear is also part of the equation.

“Making decisions comes down, ultimately, to complex issues — at what point do you redesign and start over or continue to work with the space you have and deal with the compression?” asks Hill. With compressed hardware come challenges with cooling and power delivery. Server racks used to average around 5 kilowatts per rack, he notes, and now it’s easy to go to 15 to 20 kilowatts in the same rack space with denser blade servers, for instance.

The rise of cloud computing is also changing the way companies are viewing their data centers because relying on public cloud services can mean less work for the in-house systems. Colocation for disaster recovery and backup is another factor. “A lot of decisions are made on compliance issues or corporate governance that says you need to have a second site,” at least 1,000 miles away, to prevent data loss in the event of a disaster, says Hill.

Besides the obvious security and budget concerns, another factor to consider, as in the case of the Red Cross, is whether you have enough skilled personnel to run a data center in-house. Ongoing management of a data center can be among a company’s highest costs, Hill says. “The cost to power it, cool it and manage it far exceeds the cost of a server itself over a period of time,” he explains, on top of the cost of keeping server hardware and software up to date.

Location is another consideration. As Hill points out, energy costs are usually going to be more expensive in Manhattan than in rural Iowa, for example, and so will the price of square footage.

All of this is leading more companies to turn to cloud or hosted resources, he says. That alleviates the need for maintaining the data center and doing hardware upgrades. “You don’t have to deal with” capital expenses, Hill says. “The cost of physically managing that environment becomes the responsibility of your provider rather than being an expenditure of your company.”

Dealing with rapid growth

Companies experiencing growth are also finding their data centers can’t always stay as efficient as they need them to be. “IT and business are so tightly linked now that a company can only grow as fast as its IT infrastructure will allow,” says Tony Iams, managing vice president at Gartner. This was a concern for Carfax, a provider of used vehicle history reports. It had two data centers and several vendors providing a variety of services. Meanwhile, the company has experienced an average of 15% to 18% growth annually, and space, power and cooling became problematic as IT condensed server racks, says Chris Thomas, network manager at Carfax.

Carfax opted for colocation and is now renting floor space at data center provider CyrusOne. The company maintains five data centers; two are for internal support and three are for customer support. The benefit is that Carfax can continue to grow and doesn’t have to worry about scaling the heating and cooling and space in the main data centers, Thomas says.

If you build your data center in three different locations to ensure four or five nines of redundancy, “there’s zero customer impact,” says Thomas, because if there is a problem with one data center, the other two “take over and handle the load and manage the traffic.”

The target goal for Carfax’s three colocated data centers is for each to run at 33.3% for equal load balance of servers and storage, with a plus or minus based upon a customer’s location, he says, adding that IT systems will direct a user’s request to the fastest responding data center at that moment.
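The story doesn’t describe Carfax’s traffic-management stack, but the “fastest responding data center” idea can be sketched in a few lines of Python. The endpoints and the health-check probe below are placeholders for illustration, not anything Carfax actually runs:

```python
# Minimal sketch of "route the request to the fastest-responding data center."
# The URLs and probe method are hypothetical placeholders.
import time
import urllib.request

DATA_CENTERS = {
    "dc-east":    "https://dc-east.example.com/health",
    "dc-central": "https://dc-central.example.com/health",
    "dc-west":    "https://dc-west.example.com/health",
}

def probe(url: str, timeout: float = 1.0) -> float:
    """Return the round-trip time of a health check, or infinity on failure."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            pass
        return time.monotonic() - start
    except OSError:
        return float("inf")   # an unreachable site is treated as slowest

def pick_data_center() -> str:
    """Probe every site and pick the one that answered fastest right now."""
    latencies = {name: probe(url) for name, url in DATA_CENTERS.items()}
    return min(latencies, key=latencies.get)

print("routing request to:", pick_data_center())
```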


The virtual world

When companies want to avoid having to revamp their physical data centers, another option is virtualization. Purdue Pharma, a privately held pharmaceutical company, was experiencing “a significant amount of server sprawl” with its HP systems. The company was also using HP blade chassis, and each chassis was “an island unto itself” with different code levels and complex setups, says CTO Stephen Rayda.

Also, its disaster recovery plan relied on software that was going to be discontinued by the vendor and was protecting only the 20 most critical apps in the company, while the rest were backed up on tape, “and whether we could recover the full system was questionable,” Rayda adds.

If Purdue Pharma expanded its data center, “we would have had to find a way to bring more power into the building,” he explains, and “it would be a significant cost. So we were in a difficult spot, to say the least,” he recalls. Some of the business concerns were the pace at which IT would be able to recover in the event of a disaster and the drive to lower infrastructure costs.

The company considered different options, among them investing millions of dollars to build a new data center, or completely reinventing how it was doing things and eliminating tape backup and server sprawl.

“After doing the business cases and evaluations, it became clear the better path was to reinvent” the company’s two data centers — one at company headquarters and a disaster recovery site at one of its associated plants.

At the same time, Rayda was also concerned about the amount of time it would take to get new IT services to the company’s 1,750 users in the U.S. “If everything is physical, obviously things take longer to procure and provision,” he says.

Purdue Pharma had already virtualized some of its servers in a “cobbled-together solution that was very complex,” Rayda says. IT decided to deploy VCE’s Vblock to create a converged infrastructure and use VMware’s vSphere for virtualization. After implementation, the number of servers running Oracle databases went from 49 down to eight. “The interesting thing is with the reduction to eight servers we were able to increase performance,” he notes.

By virtualizing 95% of its applications, Purdue Pharma has seen a 75% reduction in the data center footprint at a savings of $9 million over five years in capital expenditures and $2.5 million in operational expenses for energy costs. There has also been a 50% reduction in capital IT expenses related to the hardware refresh, he says.

“So the net-net was better performance and higher availability with a fraction of the cost and management,” Rayda says.

Current Analysis’ Hill is a fan of the virtualization approach. “It’s easy to manage that environment because the tools and resources have been time-tested,” he says.

Deciding how to proceed

For companies trying to figure out their next move when it comes to data center considerations, Hill suggests starting by working backwards. You need to have a good understanding of your workloads and where the peaks and valleys are, as well as the ebbs and flows of your production environment, he says.

[Photo: Server enclosures awaiting their residents. Credit: Flickr/Bugeater]

Once you’ve established your hardware requirements, you have to deal with power and cooling. Enterprises tend to have organically grown data centers with a mix of old and new technologies, Hill says. Among the challenges of building your own data center are the compression of hardware and a workload designed for four racks now fitting into one. “All of a sudden you have to deal with … that one [high-capacity] rack rather than those four lower-capacity racks.”

Another issue arises in an economic downturn or hard times for the company: IT typically gets its budget cut and has to stretch its refresh cycle from the standard three to five years to five to seven years or longer, leaving the company sitting on antiquated hardware.

If you decide to move to the cloud or outsource to a managed services provider, require the vendor to refresh hardware every 36 months, advises Bob Mobach, director of data center solutions at Logicalis.

Companies undergoing significant growth may choose colocation if they find that their existing data centers are facing hard limits in terms of space, power or cooling capacity, either now or in the future, says Gartner’s Iams. “With some colocation services, it may be possible to build assurances into contracts that guarantee the colocation service will be able to provide facilities to handle expected future growth. With such terms in the contract, companies can defer the risk of meeting the needs of future growth to the colocation service, rather than having to accurately project future data center capacity and then make the investments to build that capacity themselves.”

In terms of trying to go green with your architecture, Mobach says that’s something that “should always be on the forefront of everyone’s mind … EPA ratings with Energy Star labels are available for servers as well as data centers and will result in thousands of dollars of savings.” However, green data centers require an extra investment and may not be suited to SMBs. “For small data centers, these investments oftentimes are disproportionate and do not result in direct savings,” if the total wattage of the data center is well under 40 kilowatts, he says.

“Going green is not an inexpensive proposition; it’s the right one, but it will definitely require 10 to 30% extra cost overall,” Mobach says.

Pondering your next data center move is a major financial decision that, like anything else, requires careful evaluation and planning. If you opt for outsourcing and/or cloud integration, study vendor SLAs closely to understand remediation policies and the level of attention you will get as a client, so the experience matches what internal provisioning would deliver. For large enterprises, current asset investments, contracts and depreciation schedules should all be taken into consideration.

Ultimately, Hill believes, it is often easier to go with the “as-a-service” model than it is to build a data center yourself. “Now we’re talking about an environment where so many tasks have become generic enough that it’s easy to find resources that match your requirements.”

Three ways enterprise software is changing

Once upon a time, life in the enterprise IT shop was fairly simple, at least conceptually speaking.

IT issued computers and laptops to employees, and maintained enterprise software, databases and servers that supported the company, which were mostly run in-house.

These days, IT’s familiar terrain is giving way to a more challenging geography that the IT pro must traverse, one based on pay-as-you-go cloud computing, building applications and performing deep data analysis. Perhaps more fundamentally, IT operations are moving from merely supporting the business to driving the business itself, which requires agility and making the most of resources.

Here are three of the largest forces at work that will change enterprise software in 2015, and beyond:

The Platform

The idea of cloud computing has been around for a while, so it may be hard to think of it as a new force. Yet, after a few years of testing the cloud for running development projects and tangential applications, enterprises are now moving their more critical operations to the cloud.

IDC expects that by 2017 organizations will spend 53.7 percent of their budgets on cloud computing, and the market for cloud computing software will be over $75 billion.

Concerns about security and overall cost continue to fade as businesses face the upgrade costs of replacing data centers full of servers, or stare down the large up-front costs of implementing a complex in-house enterprise software system.

Travel information provider Lonely Planet is one Web-facing company that has made the jump into cloud services, migrating all of its operations to Amazon hosted services when its data-center lease came to an end.

“With Amazon, we could treat infrastructure as code,” said Darragh Kennedy, head of cloud operations for Lonely Planet. Instead of worrying about how many servers to lease, the company could concentrate on perfecting its service, with Amazon quickly and easily providing however many servers are needed for seamless service.

“Our product owners can stand up a new environment in under 10 minutes, and that really speeds up how quickly we can build new products,” Kennedy said.
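The article doesn’t say which tools Lonely Planet actually uses, but “treating infrastructure as code” against AWS can look roughly like this minimal boto3 sketch. The AMI ID, instance type and tag values are placeholders, and running it would require valid AWS credentials:

```python
# Minimal "infrastructure as code" sketch using boto3 (pip install boto3).
# All identifiers below are placeholders, not values from the article.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def stand_up_environment(name: str, count: int) -> list[str]:
    """Launch a small fleet of identical instances tagged for a new environment."""
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
        InstanceType="t3.medium",
        MinCount=count,
        MaxCount=count,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "environment", "Value": name}],
        }],
    )
    return [i["InstanceId"] for i in response["Instances"]]

if __name__ == "__main__":
    # Requires AWS credentials and a real AMI ID to actually run.
    print(stand_up_environment("product-preview", count=2))
```

Because the environment is described in code rather than provisioned by hand, it can be version-controlled, reviewed and recreated on demand, which is what makes the “under 10 minutes” claim plausible.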

Amazon Web Services got a head start in the cloud services space, but Microsoft is quickly catching up with its Azure service, according to Gartner.

Other enterprise-focused IT companies quickly ramped up their cloud-computing operations this year. IBM and Hewlett-Packard have each earmarked $1 billion to build out their cloud-computing services.

Complicating the best-laid cloud migration plans has been the sudden emergence of Docker, a new, lighter-weight form of virtualization that promises greater portability and faster performance.

Launched in 2013, Docker has been downloaded more than 70 million times. The major cloud service providers, including Google, IBM and Microsoft, have all spun up their own, sometimes proprietary, Docker-based services.
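As a rough illustration of what that lighter-weight model looks like in practice, here is a minimal sketch using the Docker SDK for Python. It assumes a local Docker daemon is running and uses an arbitrary public image; it isn’t tied to any of the vendors’ services mentioned above:

```python
# Minimal sketch of running a workload in a container with the Docker SDK for
# Python (pip install docker). Assumes a local Docker daemon; the image and
# command are just examples.
import docker

client = docker.from_env()

# Containers share the host kernel, so this starts in seconds rather than the
# minutes a full virtual machine would need to boot.
output = client.containers.run(
    image="python:3.12-slim",
    command=["python", "-c", "print('hello from a container')"],
    remove=True,   # clean up the container once the command exits
)
print(output.decode())
```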

CIOs who have already started down the path of cloud computing, perhaps by virtualizing some of their operations, may feel frustration at the prospect of re-gearing for Docker, but it provides one key element they will need: swiftness. It has been said that Docker is the first virtualization technology ready for the DevOps age.

What is DevOps? You should know about that as well.

The Software

A decade ago, COTS (commercial-off-the-shelf) software was the way to go. Why go through the trouble of building your own software from scratch when Oracle, Microsoft and SAP could provide you with all (or at least most) of the capabilities?

If employees grumbled about such software being sometimes difficult to use, well, they were getting paid to use it, right?

These days, however, businesses are finding that enterprise software is no longer in a supporting role, but is central to businesses maintaining a competitive edge. In many cases, this means the organization must build its own software, at least for those parts of the operation that provide the crucial competitive edge for the company.

Remaining competitive is a moving target, of course, as competitors are also busy sharpening their own products and services. Nowhere is this more pronounced than with large Internet-scale services such as Yelp, Facebook or Airbnb, which live or die by offering more helpful, easier-to-use features than their rivals. The days of asking users, or employees, to put up with fussy software are coming to an end.

Such pressure has brought about a new operating paradigm called DevOps, which, in name and in spirit, combines software development and IT operations into one cohesive workflow. Tightly integrating the development cycle of an application with the subsequent operation of that application can cut the length of time required to update a customer-facing or internal application. About 60 percent of CIOs plan to use DevOps to manage their software, IDC has estimated.

Microsoft has been filling out its portfolio of development software to support DevOps workflows, and IBM has set up a special consulting practice just to help organizations move to a DevOps-style way of working.

One user of Microsoft’s DevOps tools has been the business services division of French telecommunications company Orange, which develops systems and software for other organizations.

“A few years ago, it was the norm to deliver good functionality on time and on budget,” said Philippe Ensarguet, CTO at Orange Business Services. “Now, we have to deliver sooner and faster and better.”

One question that dogs the modern business is how to offer something unique in this global, hyper-competitive market. This is where new forms of data analysis could help.

The Data

Data analysis, once chiefly the provider of numbers for PowerPoint presentations and executive dashboards, is increasingly shaping the strategies and operations for many organizations.

Of course, data-guided business decisions are nothing new. What is new is the depth of insight that analysis can provide, as well as the greater range of data that can be put to computerized scrutiny.

IBM, among other companies, has been ambitiously pursuing the additional ways data can be parsed through cognitive computing, which harnesses techniques of machine learning, neural networks and other approaches to better mimic the ways humans intuit insight from data.

And thanks to the open-source Hadoop data-processing platform, the use of which is growing in the enterprise, additional types of data can be mined for potential knowledge.

Hadoop excels at churning through vast reams of unstructured data: data not stored in a relational database but captured in text files or log files, all the stuff IT staff used to largely ignore and then routinely delete once it filled up storage. But email, the Web-surfing habits of customers and server log files can provide insight into long-term trends, daily operations or heretofore undiscovered customer preferences.
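None of the companies’ actual pipelines appear in the article, but the canonical Hadoop Streaming pattern for this kind of log crunching is a small mapper and reducer script. The log format assumed below, with the client IP as the first whitespace-separated field, is an illustration only:

```python
# Hadoop Streaming-style mapper and reducer: count requests per client IP in
# web server logs. The log format and field positions are assumptions made
# for illustration.
import sys

def mapper() -> None:
    """Emit 'ip<TAB>1' for every log line (IP assumed to be the first field)."""
    for line in sys.stdin:
        fields = line.split()
        if fields:
            print(f"{fields[0]}\t1")

def reducer() -> None:
    """Sum counts per IP; Hadoop delivers mapper output sorted by key."""
    current_ip, count = None, 0
    for line in sys.stdin:
        ip, value = line.rstrip("\n").split("\t")
        if ip != current_ip:
            if current_ip is not None:
                print(f"{current_ip}\t{count}")
            current_ip, count = ip, 0
        count += int(value)
    if current_ip is not None:
        print(f"{current_ip}\t{count}")

if __name__ == "__main__":
    # Run as "map" for the map phase, anything else for the reduce phase.
    mapper() if sys.argv[1:] == ["map"] else reducer()
```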

One company that has found a competitive edge with such big data, as it is often called, is enterprise security services firm Solutionary, which used a MapR-based Hadoop distribution to expand the set of services it offers its customers.

Solutionary uses Hadoop to store and analyze the security and event logs of its corporate customers, so they can be alerted when suspicious activity may be taking place on their systems. Hadoop lets the company store more data at a considerably lower cost than keeping it in a data warehouse.

Using this additional data allows the company to offer its customers a longer-view analysis of what is happening on their networks. It also allows the company to perform predictive modeling on the data, potentially giving customers earlier warning about security issues.

Hadoop allowed Solutionary “to get off an architecture where you had to be careful about what to put into it, and to a model where you could store everything,” said Scott Russmann, Solutionary’s director of software engineering.