Downtime is Too Risky; Follow These 5 Steps to Set Up Your High Availability Plan


Think about how much your business relies on technology. From sales and customer service to strategic planning, accounting and human resources, your IT infrastructure is essentially the backbone of your organization. In our always-on society, downtime is simply too risky, and the damage it causes goes beyond monetary cost: it also affects the reputation of your business.

As such, your IT system must have a solid High Availability (HA) plan to avoid loss of service by reducing or managing failures and minimizing planned downtime. A good starting point for High Availability planning involves the identification of services that must be available for business continuity. Below are 5 steps you can follow to implement an HA solution.

Step 1 – Scope and Analysis   

High Availability design is often complex and requires expertise across many areas of IT to get right. Consider answering these questions:

  • In the event of a technology failure, what happens to your business?
  • What revenue and productivity losses can the business tolerate, and for how long?
  • How far back do you need to recover data (RPO)? How quickly do you need service restored (RTO)?
  • What are your costs and constraints around bandwidth, the physical environment, hosting location and management model (internally or externally managed)?

Your answers to the above questions will help you better understand your business needs and vulnerabilities. Keep in mind, High Availability is based on proactive thinking. You are ‘planning’ for disaster so you will not have to ‘react’ to it once it occurs.
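The RPO/RTO questions above translate directly into checks you can automate. Here is a minimal sketch; the four-hour RPO and one-hour RTO figures are illustrative assumptions, not recommendations:

```python
from datetime import datetime, timedelta

# Illustrative targets -- real values come out of your Step 1 business analysis.
RPO = timedelta(hours=4)    # maximum tolerable data loss
RTO = timedelta(hours=1)    # maximum tolerable time to restore service

def meets_rpo(last_backup: datetime, failure_time: datetime) -> bool:
    """Data captured since the last backup is lost; that gap must fit the RPO."""
    return failure_time - last_backup <= RPO

def meets_rto(failure_time: datetime, service_restored: datetime) -> bool:
    """Time from failure to restored service must fit the RTO."""
    return service_restored - failure_time <= RTO

failure = datetime(2017, 3, 1, 12, 0)
print(meets_rpo(datetime(2017, 3, 1, 9, 0), failure))   # 3h of lost data -> True
print(meets_rto(failure, datetime(2017, 3, 1, 14, 0)))  # 2h outage -> False
```

If a check like this fails against your actual backup schedule, the plan, not the check, is what needs to change.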

Step 2 – Design Both Hardware and Software Architecture

A High Availability architecture (also known as an HA cluster) is a group of computers that supports server applications and can be relied upon with a minimum amount of downtime. It operates by using high availability software to harness redundant computers into clusters that continue to provide service when individual components fail. Several components must be carefully considered when designing your High Availability solution: environment, hardware, software, data and network.
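The clustering idea can be illustrated with a toy failover loop. This is a sketch of the general technique, not any particular HA product’s API; the node names and heartbeat threshold are assumptions:

```python
# If the active node misses enough heartbeats, a healthy standby is promoted
# so the service stays available -- the essence of an HA cluster.
HEARTBEAT_TIMEOUT = 3  # missed heartbeats before a node is declared failed

class Cluster:
    def __init__(self, nodes):
        self.nodes = nodes                  # ordered by failover priority
        self.active = nodes[0]
        self.missed = {n: 0 for n in nodes}

    def record_heartbeat(self, node):
        self.missed[node] = 0

    def tick(self):
        """Called once per heartbeat interval; fails over if the active node is silent."""
        for n in self.nodes:
            self.missed[n] += 1
        if self.missed[self.active] >= HEARTBEAT_TIMEOUT:
            standbys = [n for n in self.nodes
                        if n != self.active and self.missed[n] < HEARTBEAT_TIMEOUT]
            if standbys:
                self.active = standbys[0]   # promote the first healthy standby

cluster = Cluster(["node-a", "node-b"])
for _ in range(3):
    cluster.tick()
    cluster.record_heartbeat("node-b")      # only the standby is responding
print(cluster.active)  # -> node-b
```

Production cluster software adds quorum, fencing and shared-state handling on top of this basic loop.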

Step 3 – Planning and Implementation

An effective HA solution comprises 80% planning and 20% implementation. Detailed planning is crucial in order to realize business objectives and achieve user satisfaction. The HA solution must be validated by your business to be deemed effective.

Step 4 – Documentation Development and Delivery

Develop a recovery framework document that provides verifiable information for compliance and audit purposes and contains essential details, such as:

  • Data that has been replicated
  • Procedures and processes
  • Technical specifications
  • Detailed documentation of the High Availability solution
  • Contact information and protocols in the event of an outage

Step 5 – Management and Maintenance

Once your High Availability solution is deployed, it’s important to ensure your business remains protected and available. Determine your enterprise’s staffing, expertise and budget requirements, and then consider working with either a Managed Services model, which leverages remotely managed hosting services, or a Support Plan model tailored to your specific requirements and budgetary goals.

The best High Availability solution for your enterprise depends on a number of factors, including business type, size, industry and IT drivers, in-house technical capabilities, and budget. Blair Technology Solutions works with hundreds of enterprises across Canada to help them develop and implement an HA plan, ensuring that the systems critical to their organization continue to provide optimal service. Contact us today to see how we can help you create customized HA solutions to lower your IT risk.

Posted in Uncategorized

The First Steps to Begin Your Data-Driven Transformation Journey


The Challenges are Real

Today’s businesses face exponential growth in data that must be stored, protected and managed. This exploding data presents new and unique operational challenges, especially given how quickly data loses integrity.

As businesses engage in the data assessment process to uncover patterns and other telltale symptoms of hidden issues, they are looking for cradle-to-grave protection and the tools to make the most of the data being collected. However, examining an enterprise’s master data strategy within or across multiple systems to ensure quality and accuracy can be a daunting task. So where should you begin?

The Answer is Simple

The answer is Data Assessment: a structured approach to developing a deeper understanding of the existing data environment and to creating a plan that optimizes the overall usage of the data.

Taking a hard look at a company’s data often reveals costly issues such as duplicate and misaligned customer, asset, materials, vendor and financial data. Proactively identifying and correcting these types of data problems can yield millions of dollars in savings for large entities.

How It Works

The ultimate goal in the Data Assessment process is to determine where your company currently stands, to identify the gaps and to provide comprehensive, salient recommendations.

The first step is a Data Discovery Workshop that brings together your IT and line-of-business professionals with experienced data strategy consultants to engage in both business and technology discussions. These discussions will help to identify your business objectives, challenges, expected business benefits and values. The goal is to have each party aligned on the use cases within your business that would benefit from modern data analytics.

Once the use case scenarios have been identified in the Data Discovery Workshop, the next stage is to look at your current-state data, architecture, systems and user personas in support of the identified use cases. What data do you need, and how do you pair it with specific cognitive technologies?

The final result of the evaluation and analysis will be presented as a high-level roadmap with progression steps outlining dates, outcomes and resource plans for data and business transformation.

Get Your Data-Driven Transformation Journey Started

A journey of a thousand miles begins with a single step.

A disciplined data optimization strategy will enable you to manage your current capacity and growth requirements, cost-effectively manage your IT infrastructure, meet the performance SLAs of your most mission-critical applications, protect the data assets of your organization and archive data for long-term retention.

Blair Technology Solutions Inc.’s comprehensive Data Assessment strategy can help you determine the criteria most important to your organization’s data integrity and usefulness. This, in turn, can help guide the development of a successful big data optimization model specifically designed for your organization. Implementing this model can then drive the savings all organizations are striving to realize.

Contact us today to start your Data-Driven Transformation journey.

Posted in Uncategorized

8 Important Questions to Ask When Considering a Managed Service Provider (MSP)


Think about how much your business relies on technology.  From sales and customer service to strategic planning, accounting and human resources, your IT infrastructure is essentially the backbone of your organization. As a result, a number of challenges may exist including, but not limited to, finite IT resources, IT infrastructure complexity and difficulty staying current with the latest technologies.

To ensure the business performs in an uninterrupted and optimal way, many companies are looking to Managed Service Providers (MSPs) to support and protect their IT environments. With an experienced Managed Service partner, your organization will benefit from:

  • Access to wide-ranging IT skill sets without incurring the operational hiring / staffing overhead
  • Risk mitigation resulting in avoidance of unplanned downtime and data loss
  • Simple and predictable financial models
  • Peace of mind knowing your IT infrastructure is secure and supported 24/7

Choosing the right MSP is not easy; the screening process can be very complex and time-consuming. The following is a list of important qualification questions that should be asked when selecting an MSP:

  1. Does the MSP have a comprehensive IT infrastructure including current hardware, software, co-location facilities and disaster recovery systems?

Look for a provider that goes beyond simple monitoring and device management. An experienced MSP should offer the ultimate in scalability and flexibility, with services ranging from the tactical – like security and software updates – to the growth-focused, such as managing migration to a cloud platform.

  2. Does the MSP have established relationships with key vendors?

Chances are your IT infrastructure is heterogeneous – hardware, software, network and cloud services are sourced from a variety of vendors. An MSP should, at minimum, have proven experience working with multivendor hybrid environments.  More importantly, an MSP must have strong relationships with leading technology providers in order to develop the technical solutions best fitted to your business.

  3. Can the MSP keep up with evolving technology?

The right technology can be a driving force that gives your organization an edge – reducing costs, saving time and improving operational efficiency. Yet it’s not always easy to evaluate rapidly emerging technologies and know what’s best for your business. An expert MSP stays current with technology innovations and can offer professional recommendations on which technologies and upgrades your business should consider.

  4. Does the MSP have the ability to support geographically-dispersed businesses?

Many businesses operate beyond a single geographic area. Your employees, customers and partners may be dispersed across the country or around the globe. The end of the work day in one location may be a peak productivity time in another. Your MSP should be “always on” – leveraging remote tools to monitor your applications, servers, databases, networks, virtualization and the cloud – regardless of time zone.

  5. Does the MSP have the ability to quickly scale your services upwards or downwards?

Because your business and IT needs are continually changing, flexibility to add or cut managed services without adding unnecessary cost and complexity to your sourcing strategy is mandatory.

  6. Does the MSP adhere to regulatory compliance standards?

Almost every industry has customer privacy and security compliance regulations. Your MSP must not only be aware of your industry’s compliance and regulatory responsibilities but also have their backup and disaster recovery solutions tailored specifically to meet your compliance requirements.

  7. Does the MSP have a proven model that includes processes such as onboarding, change management and ongoing support?

When it comes to building a long term business relationship with your MSP, systems and processes matter. Consistent and effective processes including onboarding, change management and ongoing support are imperative to success and help to build mutual trust.  Your service provider should be willing to share examples of policy and process documentation and to provide a general overview of their MSP model.

  8. Does the MSP provide a detailed Service Level Agreement that covers every element of their services?

Service Level Agreements (SLAs) are designed to ensure all parties are aligned on the scope of work, the quality to which the work will be delivered, and who is responsible for what within the relationship. A strong and comprehensive SLA is essential to the health of any MSP partnership. Is your MSP willing to commit contractually to meeting your service-level requirements and to back up those commitments when those SLAs are not met?
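When reading an SLA’s availability commitment, it helps to convert the “nines” into concrete downtime. A quick back-of-the-envelope calculation (using a 30-day month for simplicity):

```python
# What does an availability percentage actually allow in downtime per month?
MINUTES_PER_MONTH = 30 * 24 * 60   # 43,200 minutes in a 30-day month

for nines in (99.0, 99.9, 99.99, 99.999):
    allowed = MINUTES_PER_MONTH * (1 - nines / 100)
    print(f"{nines}% uptime -> {allowed:.1f} minutes of downtime per month")
```

The jump from 99% (about seven hours a month) to 99.99% (under five minutes) is why the exact number in the SLA, and the remedies attached to it, matter so much.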

A good, reputable MSP should cover all of these areas during initial discussions; if this is not the case, don’t be afraid to ask these questions before committing to the partnership.

Contact us today to learn how Blair can address these questions comprehensively to ensure you are on track in choosing the right MSP for your specific requirements.

Posted in Business, Enterprise Technology, Managed Services, MSP, SLA, Technology, Uncategorized

7 Things to Consider Before Migrating to IBM POWER8

When it comes to an ideal enterprise platform, you have a long wish list:

  • You want to build a seamless datacentre with the best performance and security at the lowest operational cost
  • You want to drive better insights and management to secure your system
  • You want to define gaps and forecast trends to prevent outages and decrease downtime
  • You want to extend your current workloads to the cloud

The list can go on and on; however, something has been holding you back from upgrading to IBM POWER8. Maybe it is the fear of a lengthy and painful migration; maybe it is an uncertainty about whether you have the required IT skill sets; or maybe it is simply because you don’t know where to start.

The following checklist provides you with a guideline to confirm that your infrastructure strategy is correctly tuned to your needs, avoiding potential cost overruns or capability shortfalls.

  1. Determine current and future capacity requirements

Bring your team together to assess your current application workload requirements and your three-to-five-year outlook. Are you giving up some business applications because you aren’t running on the latest technology? Is your current infrastructure able to support more cognitive applications for insights that can transform your organization?

  2. Determine current and future application requirements, especially around Big Data and Analytics

In this data-driven world, you need the ability to understand vast quantities of unstructured information and use it to drive smarter, faster decision-making. As more cognitive applications become available, ensure your infrastructure can support them. At the same time, you also need to ensure your transactional systems are as fast and reliable as possible to support your always-on business.

  3. Create a detailed inventory of servers across your entire IT infrastructure

Chances are your organization has single-application, single-purpose or under-utilized servers in the data centre. With the superior I/O bandwidth and performance of POWER8, you can consolidate more virtual servers on fewer physical server platforms. The direct benefits of higher consolidation include reducing the Total Cost of Ownership (TCO) of the system investment as well as the running costs for data centre floor space, power and cooling. The POWER TCO estimator will help you see the compelling financial benefits.
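The consolidation arithmetic is easy to sketch. All figures below are made-up assumptions for illustration, not POWER8 benchmarks; the real numbers would come from your server inventory and the POWER TCO estimator:

```python
# Fewer, better-utilized physical servers cut both the capital footprint and
# the running cost of floor space, power and cooling. Figures are illustrative.
current_servers = 40
avg_utilization = 0.15                 # typical for single-purpose boxes (assumed)
target_utilization = 0.60              # achievable with heavier virtualization (assumed)
consolidation_ratio = int(target_utilization / avg_utilization)  # 4:1

consolidated = -(-current_servers // consolidation_ratio)  # ceiling division
annual_cost_per_server = 3_000         # power, cooling and floor space (assumed)

savings = (current_servers - consolidated) * annual_cost_per_server
print(f"{current_servers} servers -> {consolidated}; ~${savings:,} saved per year")
```

Even with conservative assumptions, a 4:1 consolidation ratio turns 40 boxes into 10 and the operating savings compound every year.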

  4. Identify all dependencies for major database platforms, including Oracle, DB2, SAP HANA, and open-source databases like EnterpriseDB, MongoDB, Neo4j and Redis

Databases that leverage open source technology to support high transactional volumes are optimized to run at their best, at an optimal cost, on Power Systems servers. By consolidating your current servers, you’ll reduce expenditure, increase flexibility, stop server sprawl, and finally be able to shift your focus to innovation.

  5. Test your HA/DR strategy and determine whether it meets all corporate and government regulations

Can you afford to put your business at risk of an outage? Don’t find out there’s a problem with your HA/DR plan the hard way – after the fact. Be prepared to implement a system failover strategy when it counts.

  6. Understand current and future data centre environment requirements

You may be unnecessarily overspending on power, cooling and space. Savings here will help your organization avoid costs associated with data centre expansion.

  7. Ensure your investment aligns with your cloud strategy

As you move to the cloud, ensure you have a strong strategy to determine which applications can be moved off-premises. Choose the core platform that offers the most choice, flexibility, and fastest route to the cloud at the lowest cost.

Whether your priorities are performance, flexibility, scalability, openness, security or cost-efficiency, IBM POWER8 covers every base. Contact us today to get a detailed cost analysis for upgrading to POWER8 with Blair’s Managed Services.

Posted in Uncategorized

5 Methods of Controlling Storage Complexity

“Everything should be made as simple as possible. But not simpler.” – Albert Einstein

 

Back in the day, data storage was fairly simple: you chose the media, either disk or tape, captured the data and stored it. You may have needed to perform a backup once in a while, but that was it. Fast forward to the digital age, when data is growing at exponential speed and data storage has evolved into a complex and chaotic state that seems more and more out of control.

It’s no wonder that a recent study conducted by Loudhouse Research Ltd (commissioned by SUSE, Q4 2016) found that 71% of senior IT decision makers responded that their storage systems were complex, highly fragmented and said they want to “simplify their company’s storage approach” as their No. 1 priority over the next 12 months. The good news is that there are solutions to tackle today’s storage management challenges.

  1. Software-Defined Storage

The idea behind software-defined storage (SDS) is to use computer data storage software for policy-based provisioning and management of data storage independent of the underlying hardware.

Traditional data storage cannot overcome today’s challenges of scale, integration and flexibility. If your solution for managing data growth is simply to buy more storage capacity, sooner or later you’ll be facing dramatically increased costs for both storage and management. Manually managing across heterogeneous storage systems, silos and clouds is not only error-prone but also leads to administrative overhead.

Software-defined storage addresses these challenges by separating the software that provides the intelligence for storage from the traditional hardware platform. The results include easier storage management, lower storage costs and anywhere-anytime access to support cloud storage.
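The policy-based provisioning at the heart of SDS can be sketched in a few lines: placement rules live in software and are applied uniformly, whatever hardware sits underneath. The tier names and rules here are illustrative assumptions, not any product’s configuration:

```python
# Placement policy lives in software, decoupled from the hardware platforms.
POLICIES = [
    {"match": {"class": "database"}, "tier": "flash", "replicas": 3},
    {"match": {"class": "archive"},  "tier": "tape",  "replicas": 1},
    {"match": {},                    "tier": "disk",  "replicas": 2},  # default
]

def place(volume_request):
    """Return the placement decision for a volume, driven purely by policy."""
    for policy in POLICIES:
        if all(volume_request.get(k) == v for k, v in policy["match"].items()):
            return {"tier": policy["tier"], "replicas": policy["replicas"]}

print(place({"class": "database", "size_gb": 500}))   # flash tier, 3 replicas
print(place({"class": "fileshare", "size_gb": 100}))  # disk tier, 2 replicas (default)
```

Changing a rule in the policy table retargets provisioning across every backend at once; no storage box is reconfigured by hand.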

  2. Flash Storage

Flash storage is not a brand new technology; we’ve been using it for years in everyday life – portable USB drives, smartphones and cameras are all examples of flash storage. It is, however, only through some amazing advances in the technology in recent years that flash has become the default option for enterprise storage solutions.

As IDC’s report notes, “All-flash Arrays (AFAs) were first known for their extraordinary performance; however, AFAs are beginning to be known for their consistent performance.” This has made flash a go-to technology platform for performance intensive workloads such as big data/Hadoop, OLTP, and virtual desktop infrastructure, to name a few. Besides AFAs’ unbeatable performance, the major benefits also include a dramatic consolidation in the amount of physical space consumed in any given data centre. That space reduction, in turn, reduces the number of square feet that needs to be acquired as well as the amount of power required for flash storage.

Whether you’re running multiple applications in a multi-tenancy environment or heterogeneous environments with big data and business critical applications, flash delivers a unique combination of improved business benefits and lower operating expenses.

  3. Object Storage

Originally emerging in the mid-1990s and mainly intended for archiving, Object Storage exploded onto the scene once cloud applications appeared. Instead of using a complex, difficult-to-manage and antiquated file system, object storage systems leverage a single flat address space that enables the automatic routing of data to the right storage systems.

IDC projects that the total amount of data will grow to 44 zettabytes by 2020, and 80% of that will be unstructured data. Your content is diverse – requiring storage flexibility across private, dedicated and public clouds. And the diverse ways you use your data is just as important to consider as how you store it. Object Storage turns your storage challenges into business advantages by aligning the value of data and the cost of storing it while providing infinite scalability to support the capacity-on-demand capability of cloud storage.
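The flat address space is the key structural difference from a file system, and a toy model makes it concrete. This sketch imitates the put/get/head shape common to object stores; it is not any vendor’s actual API:

```python
# Every object lives under a single key in one flat namespace, with metadata
# alongside -- no directory hierarchy to traverse or manage.
class ObjectStore:
    def __init__(self):
        self._objects = {}   # flat namespace: key -> (data, metadata)

    def put(self, key, data, **metadata):
        self._objects[key] = (data, metadata)

    def get(self, key):
        return self._objects[key][0]

    def head(self, key):
        """Return only the metadata, like an object store's HEAD operation."""
        return self._objects[key][1]

store = ObjectStore()
store.put("2017/q1/report.pdf", b"...",
          content_type="application/pdf", retention="7y")
print(store.head("2017/q1/report.pdf")["retention"])  # -> 7y
```

Note that the slashes in the key are just characters in a name; the store itself has no folders, which is what lets it scale out and route data by policy.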

  4. Copy Data Management

Copy data management (CDM) is a trend in the market that continues to accelerate. The basic concept is to allow multiple workflows to access the same data — rather than proliferate independent copies for test/dev, analytics, disaster-recovery tests, e-discovery and more.

It catalogs copy data from across your local, hybrid cloud and off-site cloud infrastructure, identifies duplicates and compares copy requests against existing copies. This ensures that the minimum number of copies is created to service your various business needs.
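The core mechanism can be sketched as a catalog keyed by content hash: a copy request for data that already exists is served from the catalog rather than creating another physical copy. Purely illustrative, not a real CDM product’s interface:

```python
import hashlib

# Catalog copies by content hash; duplicate requests reuse the existing copy
# instead of proliferating independent physical copies.
class CopyCatalog:
    def __init__(self):
        self._by_hash = {}   # content hash -> canonical copy location

    def request_copy(self, data: bytes, purpose: str) -> str:
        digest = hashlib.sha256(data).hexdigest()
        if digest in self._by_hash:
            return self._by_hash[digest]          # serve the existing copy
        location = f"copy-{len(self._by_hash)}"   # pretend we provisioned one
        self._by_hash[digest] = location
        return location

catalog = CopyCatalog()
a = catalog.request_copy(b"prod-db-snapshot", purpose="test/dev")
b = catalog.request_copy(b"prod-db-snapshot", purpose="analytics")
print(a == b)  # -> True: one physical copy serves both workflows
```

Test/dev, analytics and DR-test workflows all resolve to the same stored copy, which is exactly where the capacity savings come from.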

  5. Tape

Yes, I realize in some cases tape storage is considered antiquated and has been declared “dead” many times in the past. The truth is tape is enjoying a deployment renaissance thanks to the explosion of data volumes and tape’s ultra-low storage costs.

Used as part of a software-defined storage environment that reduces overall storage complexity, tape can be remarkably easy to deploy and manage and provide you with the benefits of efficiency, scalability and security that you simply can’t ignore. As part of a blended storage strategy that also includes disk, flash and cloud, tape can also play an important role in lowering storage costs.

You can’t return to the simple storage methods of the past. You can, however, look at your entire system, figure out ultimately what you’re trying to achieve for your business and build a storage strategy that leverages different technologies/solutions to best support your business objectives.

To start seeing improvements in your storage performance now, contact us today to receive a free storage assessment / consultation that will identify potential issues in your environments.

Posted in storage, Uncategorized

Does Your Cloud Strategy Align With Tech’s Biggest Players?

It’s hard to believe that even though public cloud computing services were introduced more than a decade ago – and private cloud services came to market seven years ago – many organizations are still confused about where and how to use these services.

When looking at your cloud strategy, it’s a great idea to examine what five of tech’s biggest players are doing, to help you identify whether your strategy aligns with their direction. As these top five have just announced their strategies for 2017, here’s my overview of what each of them has to offer.

IBM’s very real turnaround

Despite reporting their 19th consecutive quarter of declining revenues, and fifth year of declining full-year revenues, Big Blue’s turnaround is real.

Since 2012, IBM has been letting its older businesses peter out, while pouring billions into cloud and mobile computing, data analytics, social and security software and artificial intelligence. While these new business revenues have not yet overtaken the old, they are close.

The company’s strategic growth areas now represent 41% of total revenues, ahead of their expectations. They forecasted reaching 40% by 2018, but they could reach 50% later this year.

Yes, they’ve struggled as the enterprise has replaced IBM data centre hardware with subscription-based cloud computing services offered by Amazon and Google who were made for the cloud. But things are looking good. In Q4, IBM reported their cloud business grew 35%.

With IBM CEO Ginni Rometty setting her sights on further development in cognitive computing, it will surely be interesting to see how the company ties cognitive capabilities into their cloud platform and what that will mean for the competition and their cloud innovation in the future.

Amazon’s aggressive growth

Amazon owns somewhere between 80% and 85% of the public cloud market. Serving the needs of enterprise IT and ordinary consumers, they are taking the lead in innovation and expansion.

They’re reporting they introduced over a thousand new services and features last year, including a handful in artificial intelligence. Not content with relying on UPS and FedEx, Amazon is building out its delivery infrastructure by building new air and ocean hubs, expanding its fleet of cargo planes and trucks, as well as sorting and distribution facilities.

Becoming a global transportation and eCommerce giant will come with massive innovation, as they continue to disrupt and redefine industry after industry with their considerable cloud dominance.

Microsoft’s Azure gains traction  

Microsoft is second to Amazon in the cloud, and they are reporting that their Azure cloud business doubled last year along with the market penetration of Azure.

It’s not a profitable business yet. They earn better margins from Office, Windows, PCs and Xboxes. In fact, the only year-over-year increase Microsoft reported last month in those four areas was a 10% bump in their Productivity and Business Product segment, which includes Office.

As the PC market continues to shrink, Microsoft will have to take a bigger ownership of the public cloud market than the 10-15% they have now. With Azure adoption progressing at a rapid pace, they will continue to be a go-to vendor for the enterprise.

Alphabet-Google promises cloud innovation

With around a 5% stake in the cloud, Google parent Alphabet has designs on a bigger share. They plan to open more data centres with the promise of something more than simple server rental.

Expect more details on their innovations in artificial intelligence and machine learning, test-driven and bankrolled by their web advertising business (the world’s largest, by the way), which has continued to grow for 20 consecutive quarters.

For those who believe there’s a ceiling on their Google ad revenues, the company is reporting that its cloud, app-store and hardware business revenues grew 62%. Their pockets are as deep as their penetration, and they are going to put both to work.

Intel’s nervous bragging rights

The chip maker generates 30% of its revenue in data centres, which has helped while the PC business steadily declines. But as the data centre business faces a low-growth future, Intel will struggle. Their revenues in this category grew in the single digits last year, compared with 11% the year before and 18% the year before that.

Intel still owns about 97% of the server market. Their chips power the servers of Amazon, Microsoft and Google. They can brag about 30% growth in cloud computing sales last year.

But they realize they have to innovate. The cloud is going to demand performance not available right now, and chip makers Qualcomm, Cavium and Advanced Micro Devices are challenging Intel’s Goliath-like monopoly.

There is no doubt that this is going to be a big year in the cloud business, so ensuring you know what each of these top players has to offer will help you identify who you might want at your table. Consider speaking to a professional IT services company with expertise in the cloud to help with implementation.

What is your plan for the cloud? How will you take advantage of the rapid innovation it offers?    

 

Posted in Amazon, Cloud Computing, Google, IBM, Intel, IT, Microsoft, Technology, Uncategorized

Enterprise IT Disrupted by Digital Transformation

Have you heard? Traditional enterprise IT is over, and digital enterprise IT is in. Some call it a revolution, a new era, a third wave. I call it today’s reality. The way we deploy and maintain technology won’t meet the demands placed on us for agility and innovation in the digital ecosystem.

IT professionals are hard-wired to think marathon, not sprint. But that’s changing. To a business that needs to bound forward, process optimization feels slow and incremental. Traditional planning cycles are too long for must-have digital initiatives. Siloed design, development, testing, deployment and operations are going to have to come together in a new model.

Digital-native companies, from Google to Uber, are changing the game in every industry, making CMOs anxious for a breakthrough idea. In response, CIOs are moving their organizations through digital transformation, supported by players like Microsoft, IBM, Dell, Cisco and SAP who are bringing forward products designed as digital transformation enablers.

Some companies will create a radically reimagined business to disrupt an industry, but most will simply use technology to gain a better edge. Either way, IT teams will need to rebuild their structures to be more responsive to shifting priorities.

Dell recently surveyed 4,000 IT and business decision makers and found that 45 per cent are worried about becoming obsolete within the next three to five years. Almost half don’t know what their industries will look like in three years, and over three quarters feel threatened by digital startups.

Pretty scary stuff, but don’t buy into the hype that we’re on the brink of a revolution. Enterprise IT has always been in the business of giving companies an advantage in a changing competitive landscape. It’s just that today’s challenges and opportunities demand something different from us. Which means we have to be different in the year ahead.

On that note, let me wish you all the best in the new year. I hope it’s successful, prosperous, and everything you want it to be.

What is your forecast for 2017? Will it mean radical change for your business, or an escalating program of strategic imperatives?

Posted in Enterprise Technology, IT