Saturday, 31 October 2015

2015 technology industry graveyard

Cisco, Microsoft, Google and others bury outdated technologies to move ahead with new ones.

Ba-bye
The Technology Industry Graveyard is pretty darn full in 2015, and we’re not even including the near-dead such as RadioShack and Microsoft’s IE browser. Pay your respects here…

GrooveShark
The self-described “World’s Music Library” is no more after shutting down in April in the wake of serious legal pressure from music companies whose songs GrooveShark allowed to be shared but had never licensed. Apple and Google had each kicked GrooveShark out of their app stores years ago due to complaints from music labels. Far sadder than the 9-year-old company’s demise, however, was the death of co-founder Josh Greenberg in July at the age of just 28.

Typo iPhone keyboard
Not even the glamor of being co-founded by American Idol host Ryan Seacrest could help Typo Innovations save its iPhone keyboard, which BlackBerry said infringed on its patents. So instead, Typo bailed on the iPhone model and settled for selling keyboards for devices with screens 7.9 inches or larger (like iPads).

Amazon Fire Phone
With a product name like Fire, you’re just asking for colorful headlines if it bombs. And indeed, Amazon has stopped making its Fire Phone about a year after introducing it and media outlets were quick to highlight the company “extinguishing” it or remarking on the phone being “burnt out.” Amazon has had some success on the hardware front, namely with its Kindle line, but the Fire just didn’t distinguish itself and was going for free with a carrier contract by the end.

Interop New York
Interop Las Vegas carries on as one of the network industry’s top trade shows next May, but little sibling Interop New York is no more this year. The Fall show, traditionally held at the Javits Center since 2005, was always smaller and was discontinued for 2015 despite lively marketing material last year touting “More Than 30 Interop New York Exhibitors and Sponsors to Make Announcements in Anticipation of the Event.”

GTalk
Google ditched so many things in 2015 that we devoted an entire slideshow to Google’s Graveyard. So to choose just one representative item here, we remember Google Talk, which had a good run, starting up in 2005. But it’s never good when Google pulls out the term “deprecated” as it did in February in reference to this chat service’s Windows App. Google said it was pulling the plug on GTalk in part to focus on Google Hangouts in a world where people have plenty of other ways to chat online. However, Google Talk does live on via third-party apps.

Cisco Invicta storage products
Cisco has a good touch when it comes to acquisitions, but its $415 million WHIPTAIL buyout from 2013 didn’t work out. The company in July revealed it had pulled the plug on its Invicta flash storage appliances acquired via that deal. It’s not unthinkable, though, that Cisco could go after another storage company, especially in light of the Dell-EMC union.

RapidShare
The once-popular file hosting system, begun in 2002, couldn’t withstand the onslaught of competition from all sides, including Google and Dropbox. Back in 2009, the Switzerland-based operation ran one of the Internet’s 20 most visited websites, according to Wikipedia. It shut down on March 31, and users’ leftover files went away with it.

Windows RT devices
This locked-down Microsoft OS for tablets and convertible laptops fared about as well as Windows 8, after being introduced as a prototype in 2011 at the big CES event in Las Vegas. Microsoft’s software for the 32-bit ARM architecture was intended to enable devices to exploit that architecture’s power efficiency, but overall, the offering proved to be a funky fit with existing Windows software. Production of RT devices stopped earlier in 2015 as Microsoft focuses on Win10 and more professional-focused Surface devices.

OpenStack vendor Nebula
As Network World’s Brandon Butler wrote in April, Nebula became one of the first casualties of the open source OpenStack cloud computing movement when it shuttered its doors. The company, whose founder was CIO for IT at NASA before starting Nebula in 2011, suggested in its farewell letter that it was a bit ahead of its time, unable to convert its $38 million in funding and hardware/software appliances into a sustainable business.

FriendFeed
Facebook bought this social news and information feed aggregator in 2009, two years after the smaller business started, and then killed it off in April. People have moved on to other means of gathering and discovering info online, so FriendFeed died from lack of use. It did inspire the very singular website, Is FriendFeed Dead Yet, however, so its legacy lives on.

Apple Aperture
Apple put the final nails in its Aperture photo editing app in 2015, ending the professional-quality post-production app’s 10-year run at Version 3.6. In its place, Apple introduced its Photos app for users of both its OS X Mac and iOS devices.

Secret
One of the co-founders of the anonymous sharing app Secret shared this in April: The company was shutting down and returning whatever part of its $35 million in funding was left. The company’s reality was just never going to match his vision for it, said co-founder David Byttow. The company faced criticism that it, like other anonymous apps such as Yik Yak, allowed for cyberbullying.

Amazon Wallet
Amazon started the year by announcing that its Wallet app, the company’s 6-month-old attempt to get into mobile payments, was a bust. The app, which had been in beta, allowed users to store their gift/loyalty/rewards cards, but not debit or credit cards as they can with Apple’s and Google’s mobile payment services.

Circa News app
Expired apps could easily fill an entire tech graveyard, so we won’t document all of their deaths here. But among them not making it through 2015 was Circa, which reportedly garnered some $4 million in venture funding since starting in 2012 but didn’t get enough takers for its app-y brand of journalism.


Tuesday, 27 October 2015

Aruba succeeded where other Wi-Fi companies failed: A talk with the founder about the acquisition by HP, the future of Wi-Fi

Wireless LAN stalwart Aruba was acquired by HP last March for $3 billion, so Network World Editor in Chief John Dix visited Aruba co-founder Keerti Melkote to see how the integration is going and to get his keen insights on the evolution of Wi-Fi. Melkote has seen it all, growing Aruba from a startup in 2002 to the largest independent Wi-Fi company with 1,800 employees. After Aruba was pulled into HP he was named CTO of the combined network business, which employs roughly 5,000. In this far-ranging interview Melkote talks about product integration and rationalization, the promise of location services and IoT, the competition, the arrival of gigabit Wi-Fi and what comes next.

Why sell to HP?
Aruba was doing really well as a company. We gained market share through every technology transition -- from 802.11a to “b” to “g” and "n" and now “ac” -- and today we’re sitting at roughly 15% global share and have a lot more than that in segments like higher education and the federal market. But we were at a point where we could win more if we had an audience at the CIO level, and increasingly we were getting exposed to global projects that required us to have a large partner in tow to give us the people onsite to execute on a worldwide basis.

So we began looking for what internally we called a big brother to help us scale to that next level. We talked to the usual suspects in terms of professional services, consulting companies, etc., but then HP approached us and said they were interested in partnering with us to go after the campus market, which is changing from wired to wireless.

HP has a good history on the wired side, so we felt this was an opportune moment to bring the sides together, but go to market with a mobile-first story. After all, as customers re-architect their infrastructure they’re not going with four cable drops to every desk, they’re looking at where the traffic is, which is all on the wireless networks these days. HP agreed with that and basically said, “Why don’t you guys come in and not only grow Aruba, but take all of networking within HP and make it a part of the whole ecosystem.”

So HP Networking and Aruba have come together in one organization and Dominic Orr [formerly CEO of Aruba] is the leader for that and I am Chief Technology Officer. We are focusing on integrating the Aruba products with the HP network products to create a mobile-first campus architecture.

Does the Aruba name go away and does everyone move to an HP campus?
No, and there is some exciting news there. The go-forward branding for networking products in the campus is going to be Aruba, including the wire line products. Over time you will start to see a shift in this mobile-first architecture with Aruba switching also coming to market.

Will that include the HP Networking operations in the area?
No, we have a global development model, so we have development sites here in Sunnyvale, Palo Alto and Roseville. And we have sites in India, China, Canada and in Costa Rica. There won’t be any changes to any of the development sites. As the business grows we’re going to have to grow most of those sites.

HP has bought other wireless players along the way, including Colubris and 3Com, so how does it all fit together?
Colubris was a pretty focused wireless acquisition back in 2008 and those products have done well for HP, but that customer base is ready for upgrades to 11ac and as they upgrade they will migrate to Aruba. The former product line will be end-of-lifed over time, but we’re not going to end support for it. A small team is supporting it and will continue to do so until customers are ready to migrate.

3Com was a much broader acquisition, involving data center campus products, routing, etc. Most of the R&D for 3Com is in China with H3C [the joint venture 3Com formed with Huawei Technologies before 3Com was acquired by HP in 2010]. There is a two-prong go-to-market approach for those products. There is a China go-to-market, which has done really well. In fact, they are number one, even ahead of Cisco, from an overall network market share perspective in China. For the rest of the world we were using the products to go after the enterprise.

As you probably heard recently, we are going to sell 51% of our share in H3C to a Chinese owned entity because there needs to be Chinese ownership for them to further grow share. H3C will be an independent entity on the Chinese stock market and will sell networking gear in China and HP servers and storage as well.

So that becomes our way to attack the China market while we will continue to sell the other network products to the rest of the world. Those products are doing very well, especially in the data center. They run some of the largest data centers in the world, names that are less familiar here in the U.S., but very large data centers for the likes of Alibaba, Tencent and other companies that are basically the Amazons and Facebooks of China.

3Com has a wireless portfolio called Unified Wireless. That product line will also be end-of-lifed but still supported, and as we migrate to next-generation architectures we will position Aruba for those buyers. The definitive statement we’ve made is Aruba will be the wireless LAN and mobility portfolio in general and Hewlett-Packard’s network products will be the go-forward switching products.

Two products are really helping to integrate our product lines: ClearPass, our unified policy management platform, which is going to be the first point where access management is integrated between wired and wireless; and AirWave, our network management product, which will become the single console for the customer to manage the entire campus network. For the data center we will have a different strategy because data center management is about integrating with servers and storage and everything else, but for the campus the AirWave product will be the management product.

3Com has a product called IMC Intelligent Management Console that will continue if customers need deep wired management, but if you need to manage a mobile-first campus, AirWave will do the complete job for you.

Given your longevity and perspective in the wireless LAN business, are we where you thought we would be in terms of Wi-Fi usage when you first started on this path 13 years ago?
It’s taken longer than I thought it would, but it has certainly far surpassed my expectations. Back in 2002 there was no iPhone or iPad. Wireless was for mobile users on laptops and we believed it would become the primary means of connecting to the network and you would no longer need to cable them in. That was the basic bet we made when we started Aruba. My hope was we would get there in five to seven years and it took 15, but things always take a little bit longer than you think.

The seminal moment in our business was the introduction of the iPad. Even though the iPhone was around, most people were still connecting to the cellular network and not Wi-Fi because of the convenience. Laptop-centric networking was still prominent, but when the iPad arrived there was no way to connect it to the wire, and there were all sorts of challenges. How do you provide pervasive wireless connectivity when the executives who brought them in took them along wherever they went? Security was a big challenge because they were all personal devices.

We had developed and perfected answers for those questions over the years so it was all sort of right there for us. And the last five years have seen dramatic changes in terms of all-wireless offices, open office space architectures, etc. Microsoft Lync was a big inflection point as well.

Why is that?
Whenever I talk to customers about pulling the cable out they always point to the phone and say, “I still need to pull a cable for that, which means I need power over Ethernet, I need an Ethernet switch in the closet, I need a PBX.” But when Lync was introduced in 2013 you could get your unified communications on your smart phone. Today, if you were to ask what is the most important device on the network, I’d say it’s the smart phone because it’s converging the computing and messaging and everything else on one device. Now you can provide a rich experience on a mobile device and do it anywhere, anytime.

Where do we stand on location-based services?
We’ve been talking about location services for a very long time. What happened was Wi-Fi based location alone wasn’t actually solving the problem. It was giving you a sense of where people were in a facility, but getting the technology to allow you to engage with somebody in physical space was not working, mostly because the operating systems on those mobile devices weren’t supporting Wi-Fi for location, just connectivity.

We have now integrated Bluetooth Low Energy (BLE) into our portfolio so you have two ways of connecting with the user; the Wi-Fi side gives you presence and Bluetooth Low Energy gives you the ability to engage on the user side so you can send notifications about where they are. That technology lets us provide tools for marketers, for retailers to send coupons, invite people into a store, and so on.

So it is finally picking up some?
It is. Actually Asia is doing well. There is a lot of construction in Asia and this is one of the demands. But the U.S. is picking up. We just implemented a large network at Levi’s Stadium right down the street here [which recently replaced Candlestick Park as home of the San Francisco 49ers].

One of the things the CEO imagined was that, as you drive from home to the game, their app would guide your experience. So they’ll take you to the right parking lot, then provide you directions to your seat, and once you are in the seat enjoying the game they wanted to provide amenities -- so food and beverage ordering and the ability to watch instant replays and the like. All these things are available for a fee of course. In the first season of operation this app generated $2 million of additional sales for Levi’s Stadium.

That was a big win for us, not just for demonstrating high density Wi-Fi where we have seen regularly 3-4 gig of traffic going to the Internet, but also showing the revenue generating potential of location-based technology.

Speaking of networking a lot of things, what do you make of the Internet of Things movement?
Eventually where it all goes is integrating the Internet of Things. Every day I interact with customers there are new use cases coming up around the intersection of location-based technology and the Internet of Things. And that’s squarely in the purview of what we are doing. It’s not today. Today is still about this all-wireless workplace, but in the next five years I think you’ll see a lot more of this. There is a lot of innovation still to come.

There’s a hodgepodge of stuff used to connect sensors today, but you see Wi-Fi playing a prominent role?
Wi-Fi will definitely be an integral component, but Bluetooth Low Energy will also be important because some sensors will be battery operated. There may be a role for the evolution of ZigBee as well. That’s super low energy. ZigBee is not yet in the mainstream enterprise but I can see some successor of that happening. But sensors will look to wireless for connectivity because they need to go anywhere. You can’t have cable follow them. So the wireless fabric is becoming super-critical for that.

Switching gears a bit, how is competition changing?
We look at three key market segments: large and medium enterprises; small/medium businesses, which have completely different characteristics; and service providers. Aruba has done really well in the large and medium enterprise segment. We have done reasonably well in the small/medium segment, but there is more competition there. Ruckus has done well there. And service provider is the emerging battleground.

As a standalone company Aruba couldn’t afford to invest, frankly, in all three segments. We were focused on the large and medium enterprise and we built a good franchise. Clearly Cisco is the primary competitor there, but now as part of HP we have another go-to-market capability and investment to take on all three segments in a meaningful way, so that’s another big reason why we came together.

We just recently announced a partnership with Ericsson to go after the service provider Wi-Fi segment, and that will help us gain share. And HP has been a strong player in the small/medium business so we’re going to take Aruba down-market. We’re going to play in all three segments. I feel if we just keep executing, market share gains are possible.

Ruckus talks about optimizing the airwaves as being their key differentiator. How do you differentiate Aruba?
The four key things I talk about are the emergence of the all-wireless workplace, in-flight communications and voice, the need for deep security with bring-your-own-device, and the need for location-based services trending toward IoT.

We talked about the all-wireless workplace and location services. Regarding voice traffic, we have invested quite a bit of energy ensuring optimal utilization. Ruckus focused on the antenna technology, while we are focused on the software that goes on top of the antenna. The analogy I’ll give you is, as you walk away from an access point I can boost my antenna power to give you a better signal, and that problem is a good problem to solve if you’re in a home because you only have one access point. But in the enterprise there is a collection of access points and the problem isn’t about holding onto a client for as long as possible, but to move the client to the best access point. So the trick is to enable the client to roam from one access point to another in a very efficient way. We call this technology ClientMatch. That is the core differentiator for us over the air, and we’ve specifically optimized it for voice by working with the Microsoft team to enable Lync and Skype for Business.

Security is a place we cannot be touched. We’ve had deep security expertise for a very long time. The DoD, three of the armed forces -- most of the federal market, actually -- use Aruba. I can’t get into all the details, but we have significant penetration because of our security depth. For enterprises that is a big deal. They really want to make sure the security side is well covered.

What’s the hot button in wireless security today?
We know how to encrypt. We know how to authenticate. Basically it is the threat of an unmanaged device coming into the network. We’re looking at solving that problem as a mobile security problem and we solved one part of it with access management, but we have this Adaptive Trust architecture which integrates with mobile device management tools -- VMware’s AirWatch, MobileIron, Microsoft’s Intune. We partner with those companies and the likes of Palo Alto Networks, and HP now brings its security and management platform ArcSight to the table. The idea is to secure the mobile edge so no matter where you are you have a secure connection back to the enterprise.

Let’s shift to the adoption of Gigabit Wi-Fi, or 802.11ac. How is that transition going?
The campus access network from your desktop to the closet has stagnated for a long time. That’s because there was really nothing driving the need for more than a gigabit’s worth of bandwidth to the desktop. Now with Gigabit Wi-Fi technologies the over-the-air rates are greater than if you were to connect to the wired LAN. So if you deploy Gigabit Wi-Fi and have signals going at 2Gbps, let’s say, the wired line becomes a bottleneck. There is a technology called Smart Rate that HP Networking introduced for its switches which allows you to raise the data rates to 2.5Gbps and even 5Gbps. At that point your access points don’t have to contend with the bottleneck and can pick up the bits over the air and put them on the wire without dropping them.

So you will need wired ports faster than a gigabit as you transition to this mobile workplace, but you won’t need as many ports as before. That is a transition, I think, that will happen over the next 2-3 years.

Did many people buy into Wave 1 of Gigabit Wi-Fi or did they hold off?
We’ve had tremendous success with Wave 1. The need for bandwidth is truly insatiable. And there is a ton of demand still to be put on the network. Video is a significant driver of bandwidth and most companies are throttling video. So the more you open the pipe, the more capacity I think people will consume. Wave 1 has done very well. I think Wave 2 will continue to do well and then there’s .11ax which will take capacity even higher.

So people bought into Wave 1 even though Wave 2 requires them to replace hardware?

I tell customers, if you’re going to wait for the next best thing you’re going to wait forever, because there’s always going to be the next best thing on the horizon. So it’s really a question of where you are in your lifecycle for an investment. If the customer is at a point where they’ve had five years of investment and they’re hurting, it’s a good time. Wave 1 can actually solve a lot of problems. There’s no need to wait another 18 months for Wave 2 technology. You know you’re going to refresh that too in five years and there will be new technology at that point in time.

Will anybody buy anything but Wave 2 at this point?
It depends. Wave 1 technology you can buy today at multiple price points in the industry. Wave 2 is still at the very top end of the range. So if you’re looking for, let’s say, lighting up a retail store and you don’t need all the capacity of Wave 2, then Wave 1 will do just fine. That’s typical of most technologies, to start at the top and eventually work its way down. We’re right in the beginning of the Wave 2 transition.

How about in carpeted office space? Would you just drop Wave 2 into key points to satisfy demand?
Wi-Fi has always basically been single user. Only one user could speak on a wireless LAN at a time. With Wave 2 you can have multiple conversations at the same time; each access point can serve four streams. So that boosts capacity in a significant way and can also improve spectrum efficiency. For that reason alone, I think Wave 2 should be used pretty much anywhere you go. You could start with a high density zone and then work your way up. That’s typically how people do it, but I would encourage most customers to take advantage of this technology.

In the industry we’ve always used speed as a measure of the next generation of technology. Never have we given attention to efficiency. This is the first time where we’re saying efficiency gains are pretty significant.

And Wave 2 ultimately will be able to support up to eight streams, right?
Yes, the technology allows you to do eight streams, although it is not possible to pack eight antennas into the form factor at this point. But it will come.

I think the targets are up to 10 gig. Let’s see how far they get. At that point, the Gigabit Ethernet backhaul will become an even more interesting problem. You’ll need 10 gig of backhaul from the access point.

In terms of the coming year, what should people look for?
They should expect a streamlined roadmap with unified management for wired and wireless, and unified security for wired and wireless in the campus. And they should expect changes in wiring closet switches to support Wave 2.

The other piece cooking in the labs is the next-generation controller technology. We invented the controller back in 2002 and that has gone through multiple generations of upgrades. The first controller had something like a 2Gbps backplane that could support 1,000 users, and now we have a 40Gbps controller that supports 32,000 users. So how do you get from there to 500Gbps? That will require us to rethink the architecture because these campuses are getting there.

We used to talk about tens of thousands of devices on a campus. Today campuses have hundreds of thousands of devices. How do you support them in a single architecture? Right now you add more controllers, but that creates a management problem. We are working on a unified solution for very large campuses and taking it to the next level for service providers as well.

Tuesday, 13 October 2015

70-332 Advanced Solutions of Microsoft SharePoint Server 2013

QUESTION 01
You need to ensure that the developers have the necessary permissions to meet the BCS model
requirements. What should you do?

A. Grant Edit permissions to the developers by using the Set Object Permissions option.
B. Grant Execute permissions to the developers by using the Set Object Permissions option.
C. Grant Edit permissions to the developers by using the Set Metadata Store Permissions option.
D. Grant Execute permissions to the developers by using the Set Metadata Store Permissions
option.

Correct Answer: C

QUESTION 02
You need to configure Excel Services. What should you do?

A. Add a trusted file location to the Certkingdom360 site.
B. Add each user as a Viewer.
C. Add each user as a Contributor.
D. Add a trusted data connection library to the Certkingdom360 site.

Correct Answer: A

QUESTION 56
You need to configure the BCS model to access data. What should you do?

A. Create an external content type and enter the target application friendly name in the Secure
Store Application ID field
B. Create an external content type and enter the target application ID in the Secure Store
Application ID field.
C. Create an external content type and choose the Connect with impersonated custom identity
option. Enter the target application friendly name of the Secure Store target application.
D. Create an external content type and choose the Connect with user's identity option.

Correct Answer: B

QUESTION 03
You need to meet the site availability requirements. What should you do?

A. Configure each web server as a node of a Network Load Balancing (NLB) cluster.
B. Create an alternate access mapping entry for each server
C. Create client-side host entries to point to specific servers.
D. Create Request Management rules to route traffic to each server.

Correct Answer: A

Friday, 9 October 2015

70-247 Configuring and Deploying a Private Cloud with System Center 2012

QUESTION 1
You have a System Center 2012 Virtual Machine Manager (VMM) infrastructure that contains a
server named Server1. Server1 hosts the VMM library. You add a server named Server2 to the
network. You install the Windows Deployment Services (WDS) server role on Server2. You have the
Install.wim file from the Windows Server 2008 R2 Service Pack 1 (SP1) installation media. You need
to install Hyper-v hosts by using the bare-metal installation method. What should you do first?

A. Add Install.wim to the VMM library.
B. Convert Install.wim to a .vhd file.
C. Convert Install.wim to a .vmc file.
D. Add Install.wim to the Install Images container.

Answer: B


QUESTION 2
You have a System Center 2012 Virtual Machine Manager (VMM) infrastructure that contains a
virtualization host named Server2. Server2 runs Windows Server 2008 R2 Service Pack 1 (SP1).
Server2 has the Hyper-V server role installed. You plan to deploy a service named Service1 to
Server2. Service1 has multiple load-balanced tiers. You need to recommend a technology that must
be implemented on Server2 before you deploy Service1. What should you recommend?

A. MAC address spoofing
B. the Network Policy and Access Services (NPAS) server role
C. TCP Offloading
D. the Multipath I/O (MPIO) feature

Answer: A


QUESTION 3
Your network contains a server named Server1 that has System Center 2012 Virtual Machine
Manager (VMM) installed. You have a host group named HG1. HG1 contains four virtualization
hosts named Server2, Server3, Server4, and Server5. You plan to provide users with the ability to
deploy virtual machines by using the Self-Service Portal. The corporate management policy states
that only the members of a group named Group1 can place virtual machines on Server2 and
Server3 and only the members of a group named Group2 can place virtual machines on Server4
and Server5. You need to recommend a cloud configuration to meet the requirements of the
management policy. What should you recommend?

A. Create two clouds named Cloud1 and Cloud2. Configure the custom properties of each cloud.
B. Create a host group named HG1\HG2. Create one cloud for HG1 and one cloud for HG2. Move
two servers to HG2.
C. Create two clouds named Cloud1 and Cloud2. Configure placement rules for HG1.
D. Create two host groups named HG1\Group1 and HG1\Group2. Create one cloud for each new
host group. Move two servers to each host group.

Answer: D


QUESTION 4
Your company has a private cloud that contains 200 virtual machines. The network contains a
server named Server1 that has the Microsoft Server Application Virtualization (Server App-V)
Sequencer installed. You plan to sequence, and then deploy a line-of-business web application
named App1. App1 has a Windows Installer package named Install.msi. App1 must be able to store
temporary files. You need to identify which task must be performed on Server1 before you deploy
App1. What task should you identify?

A. Modify the environment variables.
B. Add a script to the OSD file.
C. Compress Install.msi.
D. Install the Web Server (IIS) server role.

Answer: D


QUESTION 174
Your company has three datacenters located in New York, Los Angeles and Paris. You deploy a
System Center 2012 Virtual Machine Manager (VMM) infrastructure. The VMM infrastructure
contains 2,000 virtual machines deployed on 200 Hyper-V hosts. The network contains a server
named DPM1 that has System Center 2012 Data Protection Manager (DPM) installed.
You need to recommend a solution for the infrastructure to meet the following requirements:
* Automatically backup and restore virtual machines by using workflows.
* Automatically backup and restore system states by using workflows.
What should you include in the recommendation? (Each correct answer presents part of the
solution. Choose two.)

A. Deploy System Center 2012 Orchestrator.
B. Install the Integration Pack for System Center Virtual Machine Manager (VMM).
C. Install the Integration Pack for System Center Data Protection Manager (DPM).
D. Deploy System Center 2012 Operations Manager.
E. Deploy System Center 2012 Service Manager.

Answer: AB


QUESTION 5
You are the datacenter administrator for a company named CertKingdom, Ltd. The network contains a
server that has System Center 2012 Virtual Machine Manager (VMM) installed. You create four
private clouds. Developers at CertKingdom have two Windows Azure subscriptions. CertKingdom creates a
partnership with another company named A.Datum. The A.Datum network contains a System
Center 2012 Virtual Machine Manager (VMM) infrastructure that contains three clouds.
Developers at A.Datum have two Windows Azure subscriptions. You deploy System Center 2012
App Controller at A.Datum. You plan to manage the clouds and the Windows Azure subscriptions
for both companies from the App Controller portal. You need to identify the minimum number of
subscriptions and the minimum number connections required for the planned management. How
many connections and subscriptions should you identify?

A. Two connections and four subscriptions
B. Two connections and two subscriptions
C. Four connections and four subscriptions
D. Eight connections and four subscriptions
E. Four connections and two subscriptions

Answer: A


QUESTION 6
Your network contains an Active Directory forest named CertKingdom.com. The forest contains a System
Center 2012 Operations Manager infrastructure. Your company, named CertKingdom, Ltd., has a partner
company named A. Datum Corporation. The A. Datum network contains an Active Directory forest
named adatum.com. Adatum.com does not have any trusts. A firewall exists between the A. Datum
network and the CertKingdom network. You configure conditional forwarding on all of the DNS servers
to resolve names across the forests. You plan to configure Operations Manager to monitor client
computers in both of the forests. You need to recommend changes to the infrastructure to monitor
the client computers in both of the forests. What should you include in the recommendation? (Each
correct answer presents part of the solution. Choose two.)

A. Allow TCP port 5723 on the firewall.
B. Deploy a gateway server to adatum.com.
C. Create a DNS zone replica of adatum.com.
D. Allow TCP port 5986 on the firewall.
E. Create a DNS zone replica of CertKingdom.com.
F. Deploy a gateway server to CertKingdom.com.

Answer: AB

Thursday, 17 September 2015

More automation, fewer jobs ahead

Internet of Things in 2025: The good and bad

Within 10 years, the U.S. will see the first robotic pharmacist. Driverless cars will equal 10% of all cars on the road, and the first implantable mobile phone will be available commercially.

These predictions, and many others, were included in a World Economic Forum report, released this month. The "Technological Tipping Points Survey" is based on responses from 800 IT executives and other experts.

A tipping point is the moment when specific technological shifts go mainstream. In 10 years, many technologies will be widely used that today are in pilot or are still new to the market.

The Internet of Things will have a major role. Over the next decade there will be one trillion sensors allowing all types of devices to connect to the Internet.

Worldwide, the report estimates, 50 billion devices will be connected to the Internet by 2020. To put that figure in perspective, the report points out, the Milky Way -- the earth's galaxy -- contains about 200 billion suns.

The ubiquitous deployment of sensors, via the Internet of Things, will deliver many benefits, including increases in efficiency and productivity, and improved quality of life. But its negative impacts include job losses, particularly for unskilled labor, as well as more complexity and loss of control.

Robotics, too, will be a mixed bag. It will return some manufacturing back to the U.S., as offshore workers are replaced with onshore robots. But robotics -- including the first robotic pharmacist -- will result in job losses as well.

There's concern that "we are facing a permanent reduction in the need for human labor," said the report.

That may still be an outlier view. Efficiency and productivity gains have historically increased employment. But a shift may be underway.

"Science fiction has long imagined the future where people no longer have to work and could spend their time on more noble pursuits," the report said. "Could it be that society is reaching that inflection point in history?"

That question doesn't have a clear answer. The Industrial Revolution destroyed some jobs but created many more, the report points out. "It can be challenging to predict what kinds of jobs will be created, and almost impossible to measure them," the report notes.

Other predictions included:
Driverless cars will make up one in 10 of the vehicles on the road, and this will improve safety, reduce stress, free up time and give older and disabled people more transportation options. But driverless vehicles may also result in job losses, particularly in the taxi and trucking industries.

One in 10 people will be wearing connected clothing in 10 years. Implantable technologies will also be more common, and may be as sophisticated as smartphones. These technologies may help people self-manage healthcare as well as lead to a decrease in missing children. Potential negatives include loss of privacy and surveillance issues.

The forecasters were bullish on vision technologies over the next decade. This is tech similar to Google Glass that enhances, augments and provides "immersive reality." Eye-tracking technologies, as well, will be used as a means of interaction.

Unlimited free storage that's supported by advertising is expected by 2018.


Saturday, 5 September 2015

Microsoft, U.S. face off again over emails stored in Ireland

The company has refused to turn over to the government the emails stored in Ireland

A dispute between Microsoft and the U.S. government over turning over emails stored in a data center in Ireland comes up for oral arguments in an appeals court in New York on Wednesday.

Microsoft holds that an outcome against it could affect the trust of its cloud customers abroad as well as affect relationships between the U.S. and other governments which have their own data protection and privacy laws.

Customers outside the U.S. would be concerned about extra-territorial access to their user information, the company has said. A decision against Microsoft could also establish a norm that could allow foreign governments to reach into computers in the U.S. of companies over which they assert jurisdiction, to seize the private correspondence of U.S. citizens.

The U.S. government has a warrant for access to the emails, held by Microsoft, of a person involved in an investigation, but the company holds that nowhere did the U.S. Congress say that the Electronic Communications Privacy Act "should reach private emails stored on providers’ computers in foreign countries."

It prefers that the government use "mutual legal assistance" treaties it has in place with other countries including Ireland. In an amicus curiae (friend of the court) brief filed in December in the U.S. Court of Appeals for the Second Circuit, Ireland said it “would be pleased to consider, as expeditiously as possible, a request under the treaty, should one be made.”

A number of technology companies, civil rights groups and computer scientists have filed briefs supporting Microsoft.

In a recent filing in the Second Circuit court, Microsoft said "Congress can and should grapple with the question whether, and when, law enforcement should be able to compel providers like Microsoft to help it seize customer emails stored in foreign countries."

"We hope the U.S. government will work with Congress and with other governments to reform the laws, rather than simply seek to reinterpret them, which risks happening in this case," Microsoft's general counsel Brad Smith wrote in a post in April.

Lower courts have disagreed with Microsoft's point of view. U.S. Magistrate Judge James C. Francis IV of the U.S. District Court for the Southern District of New York had in April last year refused to quash a warrant that authorized the search and seizure of information linked with a specific Web-based email account stored on Microsoft's premises.

Microsoft complied with the search warrant by providing non-content information held on its U.S. servers but moved to quash the warrant after it concluded that the account was hosted in Dublin and the content was also stored there.

If the territorial restrictions on conventional warrants applied to warrants issued under section 2703 (a) of the Stored Communications Act, a part of the ECPA, the burden on the government would be substantial, and law enforcement efforts would be seriously impeded, the magistrate judge wrote in his order. The act covers required disclosure of wire or electronic communications in electronic storage.

While the company held that courts in the U.S. are not authorized to issue warrants for extraterritorial search and seizure, Judge Francis held that a warrant under the Stored Communications Act, was "a hybrid: part search warrant and part subpoena." It is executed like a subpoena in that it is served on the Internet service provider who is required to provide the information from its servers wherever located, and does not involve government officials entering the premises, he noted.

Judge Loretta Preska of the District Court for the Southern District of New York rejected Microsoft's appeal of the ruling, and the company thereafter appealed to the Second Circuit.




Monday, 31 August 2015

10 security technologies destined for the dustbin

Systemic flaws and a rapidly shifting threatscape spell doom for many of today’s trusted security technologies

Perhaps nothing, not even the weather, changes as fast as computer technology. With that brisk pace of progress comes a grave responsibility: securing it.

Every wave of new tech, no matter how small or esoteric, brings with it new threats. The security community slaves to keep up and, all things considered, does a pretty good job against hackers, who shift technologies and methodologies rapidly, leaving last year’s well-recognized attacks to the dustbin.

Have you had to enable the write-protect notch on your floppy disk lately to prevent boot viruses or malicious overwriting? Have you had to turn off your modem to prevent hackers from dialing it at night? Have you had to unload your ansi.sys driver to prevent malicious text files from remapping your keyboard to make your next keystroke reformat your hard drive? Did you review your autoexec.bat and config.sys files to make sure no malicious entries were inserted to autostart malware?

Not so much these days -- hackers have moved on, and the technology made to prevent older hacks like these is no longer top of mind. Sometimes we defenders have done such a good job that the attackers decided to move on to more fruitful options. Sometimes a particular defensive feature gets removed because the good guys determined it didn't protect that well in the first place or had unexpected weaknesses.

If you, like me, have been in the computer security world long enough, you’ve seen a lot of security tech come and go. It’s almost to the point where you can start to predict what will stick and be improved and what will sooner or later become obsolete. The pace of change in attacks and technology alike mean that even so-called cutting-edge defenses, like biometric authentication and advanced firewalls, will eventually fail and go away. Surveying today's defense technologies, here's what I think is destined for the history books.

Doomed security technology No. 1: Biometric authentication

Biometric authentication is a tantalizing cure-all for log-on security. After all, using your face, fingerprint, DNA, or some other biometric marker seems like the perfect log-on credential -- to someone who doesn't specialize in log-on authentication. As far as those experts are concerned, it’s not so much that biometric methods are rarely as accurate as most people think; it's more that, once stolen, your biometric markers can't be changed.

Take your fingerprints. Most people have only 10. Anytime your fingerprints are used as a biometric logon, those fingerprints -- or, more accurately, the digital representations of those fingerprints -- must be stored for future log-on comparison. Unfortunately, log-on credentials are far too often compromised or stolen. If the bad guy steals the digital representation of your fingerprints, how could any system tell the difference between your real fingerprints and their previously accepted digital representations?
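
A toy sketch makes that problem concrete. The Python below is purely hypothetical (real matchers do fuzzy comparison of extracted features rather than exact hashes, and the names here are made up for illustration), but the consequence of a stolen template is the same: once the stored representation leaks, a replayed copy is indistinguishable from the genuine finger, and unlike a password the finger cannot be rotated.

import hashlib

enrolled = {}  # hypothetical store of fingerprint-template digests

def enroll(user, template_bytes):
    # Keep a digest of the digital representation of the fingerprint.
    enrolled[user] = hashlib.sha256(template_bytes).hexdigest()

def login(user, presented_bytes):
    # The matcher only sees bytes; it cannot tell a live finger from a replayed copy.
    return enrolled.get(user) == hashlib.sha256(presented_bytes).hexdigest()

enroll("alice", b"alice-fingerprint-template")
stolen_copy = b"alice-fingerprint-template"   # exfiltrated from a breached database
print(login("alice", stolen_copy))            # True, forever -- there is no "reset fingerprint"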

In that case, the only solution might be to tell every system in the world that might rely on your fingerprints to not rely on your fingerprints, if that were even possible. The same is true for any other biometric marker. You'll have a hard time repudiating your real DNA, face, retina scan, and so on if a bad player gets their hands on the digital representation of those biometric markers.

That doesn’t even take into account issues around systems that only allow you to log on if you use, say, your fingerprint when you can no longer reliably use your fingerprint. What then?

Biometric markers used in conjunction with a secret only you know (password, PIN, and so on) are one way to defeat hackers that have your biometric logon marker. Of course mental secrets can be captured as well, as happens often with nonbiometric two-factor log-on credentials like smartcards and USB key fobs. In those instances, admins can easily issue you a new physical factor and you can pick a new PIN or password. That isn't the case when one of the factors is your body.

While biometric logons are fast becoming a trendy security feature, there's a reason they aren’t -- and won't ever be -- ubiquitous. Once people realize that biometric logons aren't what they pretend to be, they will lose popularity and either disappear, always require a second form of authentication, or only be used when high-assurance identification is not needed.

Doomed security technology No. 2: SSL

Secure Socket Layer was invented by long-gone Netscape in 1995. For two decades it served us adequately. But if you haven't heard, it is irrevocably broken and can't be repaired, thanks to the Poodle attack. SSL’s replacement, TLS (Transport Layer Security), is slightly better. Of all the doomed security tech discussed in this article, SSL is the closest to being replaced, as it should no longer be used.

The problem? Hundreds of thousands of websites rely on or allow SSL. If you disable all SSL -- a common default in the latest versions of popular browsers -- all sorts of websites don't work. Or they will work, but only because the browser or application accepts "downleveling" to SSL. If it's not websites and browsers, then it's the millions of old SSH servers out there.

OpenSSH is seemingly constantly being hacked these days. While it’s true that about half of OpenSSH hacks have nothing to do with SSL, SSL vulnerabilities account for the other half. Millions of SSH/OpenSSH sites still use SSL even though they shouldn't.

Worse, terminology among tech pros is contributing to the problem, as nearly everyone in the computer security industry calls TLS digital certificates "SSL certs" though they don't use SSL. It's like calling a copy machine a Xerox when it's not that brand. If we’re going to hasten the world off SSL, we need to start calling TLS certs "TLS certs."

Make a vow today: Don't use SSL ever, and call Web server certs TLS certs. That's what they are or should be. The sooner we get rid of the word "SSL," the sooner it will be relegated to history's dustbin.
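
In practice, refusing SSL is a one-line decision in most modern TLS stacks. Here is a minimal sketch using Python's standard ssl module (Python 3.7 or newer is assumed; the certificate and key file names are placeholders, not paths from any real deployment):

import ssl

# Server-side context that negotiates TLS 1.2 or newer and never offers SSLv2/SSLv3.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.minimum_version = ssl.TLSVersion.TLSv1_2
context.load_cert_chain(certfile="server.crt", keyfile="server.key")  # placeholder files

Any client that can only speak SSL fails the handshake outright instead of quietly "downleveling" the connection.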

Doomed security technology No. 3: Public key encryption

This may surprise some people, but most of the public key encryption we use today -- RSA, Diffie-Hellman, and so on -- is predicted to be readable as soon as quantum computing and cryptography are figured out. Many, including this author, have been long (and incorrectly) predicting that usable quantum computing was mere years away. But when researchers finally get it working, most known public encryption ciphers, including the popular ones, will be readily broken. Spy agencies around the world have been saving encrypted secrets for years waiting for the big breakthrough -- or, if you believe some rumors, they already have solved the problem and are reading all our secrets.

Some crypto experts, like Bruce Schneier, have long been dubious about the promise of quantum cryptography. But even the critics can't dismiss the likelihood that, once it's figured out, any secret encrypted by RSA, Diffie-Hellman, or even ECC will be immediately readable.

That's not to say there aren't quantum-resistant cipher algorithms. There are a few, including lattice-based cryptography and Supersingular Isogeny Key Exchange. But if your public cipher isn't one of those, you're out of luck if and when quantum computing becomes widespread.

Doomed security technology No. 4: IPsec

When enabled, IPsec allows all network traffic between two or more points to be cryptographically protected for packet integrity and privacy, aka encrypted. Invented in 1993 and made an open standard in 1995, IPsec is widely supported by hundreds of vendors and used on millions of enterprise computers.

Unlike most of the doomed security defenses discussed in this article, IPsec works and works great. But its problems are two-fold.

First, although widely used and deployed, it has never reached the critical mass necessary to keep it in use for much longer. Plus, IPsec is complex and isn't supported by all vendors. Worse, it can often be defeated by only one device in between the source and destination that does not support it -- such as a gateway or load balancer. At many companies, the number of computers that get IPsec exceptions is greater than the number of computers forced to use it.

IPsec's complexity also creates performance issues. When enabled, it can significantly slow down every connection using it, unless you deploy specialized IPsec-enabled hardware on both sides of the tunnel. Thus, high-volume transaction servers such as databases and most Web servers simply can’t afford to employ it. And those two types of servers are precisely where most important data resides. If you can't protect most data, what good is it?

Plus, despite being a "common" open standard, IPsec implementations don't typically work between vendors, another factor that has slowed down or prevented widespread adoption of IPsec.

But the death knell for IPsec is the ubiquity of HTTPS. When you have HTTPS enabled, you don't need IPsec. It's an either/or decision, and the world has spoken. HTTPS has won. As long as you have a valid TLS digital certificate and a compatible client, it works: no interoperability problems, low complexity. There is some performance impact, but it’s not noticeable to most users. The world is quickly becoming a default world of HTTPS. As that progresses, IPsec dies.
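
To see why HTTPS has become the default answer, it helps to remember how little the client side needs: a certificate check and nothing else. A hedged sketch with Python's standard library (the host name and port are placeholders for illustration):

import socket
import ssl

# create_default_context() verifies the server's certificate chain and host name by default.
context = ssl.create_default_context()

with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        # Reaching this point means the certificate checked out; no IPsec tunnel was involved.
        print(tls.version())

No per-vendor interoperability matrix, no gateway exceptions: if the certificate is valid and the client is compatible, the connection is protected.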

Doomed security technology No. 5: Firewalls

The ubiquity of HTTPS essentially spells the doom of the traditional firewall. I wrote about this in 2012, creating a mini-firestorm that won me invites to speak at conferences all over the world.

Some people would say I was wrong. Three years later, firewalls are still everywhere. True, but most aren't properly configured, and almost all don't have the "least permissive, block-by-default" rules that make a firewall valuable in the first place. Most firewalls I come across have overly permissive rules. I often see "Allow All ANY ANY" rules, which essentially means the firewall is worse than useless. It's doing nothing but slowing down network connections.

Any way you define a firewall, it must include some portion that allows only specific, predefined ports in order to be useful. As the world moves to HTTPS-only network connections, all firewalls will eventually have only a few rules -- HTTP/HTTPS and maybe DNS. Other protocols, such as DNS, DHCP, and so on, will likely start using HTTPS-only too. In fact, I can't imagine a future that doesn't end up HTTPS-only. When that happens, what of the firewall?

The main protection firewalls offer is to secure against a remote attack on a vulnerable service. Remotely vulnerable services, usually exploited by one-touch, remotely exploitable buffer overflows, used to be among the most common attacks. Look at the Robert Morris Internet worm, Code Red, Blaster, and SQL Slammer. But when's the last time you heard of a global, fast-acting buffer overflow worm? Probably not since the early 2000s, and none of those were as bad as the worms from the 1980s and 1990s. Essentially, if you don't have an unpatched, vulnerable listening service, then you don't need a traditional firewall -- and right now you don't. Yep, you heard me right. You don't need a firewall.

Firewall vendors often write to tell me that their "advanced" firewall has features beyond the traditional firewall that makes theirs worth buying. Well, I've been waiting for more than two decades for "advanced firewalls" to save the day. It turns out they don't. If they perform "deep packet inspection" or signature scanning, it either slows down network traffic too much, is rife with false positives, or scans for only a small subset of attacks. Most "advanced" firewalls scan for a few dozen to a few hundred attacks. These days, more than 390,000 new malware programs are registered every day, not including all the hacker attacks that are indistinguishable from legitimate activity.

Even when firewalls do a perfect job at preventing what they say they prevent, they don't really work, given that they don't stop the two biggest malicious attacks most organizations face on a daily basis: unpatched software and social engineering.

Put it this way: Every customer and person I know currently running a firewall is as hacked as someone who isn't. I don't fault firewalls. Perhaps they worked so well back in the day that hackers moved on to other sorts of attacks. For whatever reason, firewalls are nearly useless today and have been trending in that direction for more than a decade.

Doomed security technology No. 6: Antivirus scanners

Depending on whose statistics you believe, malware programs currently number in the tens to hundreds of millions -- an overwhelming fact that has rendered antivirus scanners nearly useless.

Not entirely useless, because they stop 80 to 99.9 percent of attacks against the average user. But the average user is exposed to hundreds of malicious programs every year; even with the best odds, the bad guy wins every once in a while. If you keep your PC free from malware for more than a year, you've done something special.

That isn’t to say we shouldn’t applaud antivirus vendors. They've done a tremendous job against astronomical odds. I can't think of any other sector that has had to adjust to such overwhelming growth in numbers and such rapid advances in technology since the late 1980s, when there were only a few dozen viruses to detect.

But what will really kill antivirus scanners isn't this glut of malware. It's whitelisting. Right now the average computer will run any program you install. That's why malware is everywhere. But computer and operating system manufacturers are beginning to reset the "run anything" paradigm for the safety of their customers -- a movement that is antithetical to antivirus programs, which allow everything to run unimpeded except for programs that contain one of the more than 500 million known antivirus signatures. “Run by default, block by exception” is giving way to “block by default, allow by exception.”
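
The core of "block by default, allow by exception" is small enough to sketch. The Python fragment below is purely illustrative (the allowlist digest and the program path are made up, and real products such as AppLocker or Gatekeeper check publisher signatures and paths, not just file hashes): anything whose hash isn't pre-approved simply never runs.

import hashlib
import subprocess
import sys

# Hypothetical allowlist: SHA-256 digests of the only executables permitted to run.
APPROVED_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",  # example digest only
}

def run_if_approved(path):
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    if digest not in APPROVED_HASHES:
        sys.exit("blocked: " + path + " is not on the allowlist")  # block by default
    subprocess.run([path], check=True)  # allow by exception

run_if_approved("/usr/local/bin/some-tool")  # placeholder path

The operational cost is exactly the roadblock described below: someone has to approve every new digest before it will run.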

Of course, computers have long had whitelisting programs, aka application control programs. I reviewed some of the more popular products back in 2009. The problem: Most people don't use whitelisting, even when it’s built in. The biggest roadblock? The fear of what users will do if they can't install everything they want willy-nilly or the big management headache of having to approve every program that can be run on a user’s system.

But malware and hackers are becoming more pervasive and more damaging, and vendors are responding by enabling whitelisting by default. Apple's OS X introduced something close to default whitelisting three years ago with Gatekeeper. iOS devices have had near-whitelisting far longer, in that they can run only approved applications from the App Store (unless the device is jailbroken). Some malicious programs have slipped past Apple, but the process has been remarkably successful at stopping the flood of malware that normally follows popular operating systems and programs.

Microsoft has long had a similar mechanism, through Software Restriction Policies and AppLocker, but an even stronger push is coming in Windows 10 with DeviceGuard. Microsoft’s Windows Store also offers the same protections as Apple's App Store. While Microsoft won't be enabling DeviceGuard or Windows Store-only applications by default, the features are there and are easier to use than before.

Once whitelisting becomes the default on most popular operating systems, it's game over for malware and, subsequently, for antivirus scanners. I can't say I'll miss either.

Doomed security technology No. 7: Antispam filters

Spam still makes up more than half of the Internet's email. You might not notice this anymore, thanks to antispam filters, which have reached levels of accuracy that antivirus vendors can only claim to deliver. Yet spammers keep spitting out billions of unwanted messages each day. In the end, only two things will ever stop them: universal, pervasive, high-assurance authentication and more cohesive international laws.
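Content filters of this kind typically lean on statistical scoring, with naive Bayes as the classic textbook example. The toy sketch below uses made-up word counts purely to show the idea -- real filters train on enormous corpora and combine reputation, authentication, and many other signals:

# Toy word-probability spam score in the spirit of classic Bayesian filters.
# The tiny "training" counts are made up for illustration; real filters train
# on millions of messages and use far more than word frequency.
from math import log

spam_counts = {"winner": 40, "free": 35, "meeting": 2}
ham_counts = {"winner": 1, "free": 5, "meeting": 50}
SPAM_TOTAL, HAM_TOTAL = 100, 100  # assumed corpus sizes

def spam_log_odds(message: str) -> float:
    """Sum of crudely smoothed log-likelihood ratios for each word."""
    score = 0.0
    for word in message.lower().split():
        p_spam = (spam_counts.get(word, 0) + 1) / (SPAM_TOTAL + 2)
        p_ham = (ham_counts.get(word, 0) + 1) / (HAM_TOTAL + 2)
        score += log(p_spam / p_ham)
    return score

print(spam_log_odds("free winner"))   # strongly positive -> likely spam
print(spam_log_odds("team meeting"))  # negative -> likely legitimate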

Spammers still exist mainly because we can't easily catch them. But as the Internet matures, pervasive anonymity will be replaced by pervasive high-assurance identities. At that point, when someone sends you a message claiming to have a bag of money to mail you, you will be assured they are who they say they are.

High-assurance identities can be established only when all users are required to verify their identity with two-factor (or stronger) authentication, backed by identity-assured computers and networks. Every cog between the sender and the receiver will have a higher level of reliability. Part of that reliability will be provided by pervasive HTTPS (discussed above), but it will ultimately require additional mechanisms at every stage of authentication to assure that when I say I'm someone, I really am that someone.
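As one concrete example of a second factor, here's a bare-bones sketch of a time-based one-time password (TOTP, RFC 6238), the mechanism behind most phone authenticator apps. The secret below is a well-known test value, not anything you should ever deploy:

# Sketch of a time-based one-time password (TOTP, RFC 6238), one common way
# to add a second authentication factor. Uses only the standard library.
import base64, hmac, hashlib, struct, time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // period          # 30-second time step
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # same code a phone authenticator would show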

Today, almost anyone can claim to be anyone else, and there's no universal way to verify that person's claim. This will change. Almost every other critical infrastructure we rely on -- transportation, power, and so on -- requires this assurance. The Internet may be the Wild West right now, but the increasingly essential nature of the Internet as infrastructure virtually ensures that it will eventually move in the direction of identity assurance.

Meanwhile, the international border problem that permeates nearly every online-criminal prosecution is likely to be resolved in the near future. Right now, many major countries do not accept evidence or warrants issued by other countries, which makes arresting spammers (and other malicious actors) nearly impossible. You can collect all the evidence you like, but if the attacker’s home country won't enforce the warrant, your case is toast.

As the Internet matures, however, countries that don't help ferret out the Internet's biggest criminals will be penalized. They may be placed on a blacklist. In fact, some already are. For example, many companies and websites reject all traffic originating from China, whether it's legitimate or not. Once we can identify criminals and their home countries beyond repudiation, as outlined above, those home countries will be forced to respond or suffer penalties.
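An illustrative sketch of that kind of origin-based blocking appears below. The address ranges are documentation placeholders, since real deployments rely on a maintained geolocation feed rather than a hard-coded list:

# Illustrative origin-based block list: drop connections whose source address
# falls inside a blocked network range. The prefixes are documentation ranges
# used as placeholders, not real geolocation data.
import ipaddress

BLOCKED_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def should_block(source_ip: str) -> bool:
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in BLOCKED_NETWORKS)

print(should_block("203.0.113.55"))  # True
print(should_block("192.0.2.10"))    # False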

The heyday of spammers, when most of their crap actually reached your inbox, is already over. Pervasive identities and international legal changes will close the coffin lid on spam -- and on the security tech necessary to combat it.

Doomed security technology No. 8: Anti-DoS protections

Thankfully, the same pervasive identity protections mentioned above will be the death knell for denial-of-service (DoS) attacks and the technologies that have arisen to quell them.

These days, anyone can use free Internet tools to overwhelm websites with billions of packets. Most operating systems have built-in anti-DoS protections, and more than a dozen vendors can protect your websites even when they're being hit by extraordinary amounts of bogus traffic. But the loss of pervasive anonymity will stop malicious senders of DoS traffic. Once we can identify them, we can arrest them.
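One of the basic building blocks behind those protections is per-source rate limiting. Here's a minimal token-bucket sketch under assumed, illustrative parameters -- not any vendor's actual implementation:

# Minimal token-bucket rate limiter, a basic building block behind many
# anti-DoS protections: each source gets a refilling budget of requests,
# and traffic beyond that budget is dropped. Parameters are illustrative.
import time
from collections import defaultdict

RATE = 10.0   # tokens added per second, per source
BURST = 20.0  # maximum bucket size

buckets = defaultdict(lambda: (BURST, time.monotonic()))  # ip -> (tokens, last seen)

def allow(source_ip: str) -> bool:
    tokens, last = buckets[source_ip]
    now = time.monotonic()
    tokens = min(BURST, tokens + (now - last) * RATE)  # refill since last request
    if tokens < 1.0:
        buckets[source_ip] = (tokens, now)
        return False                                   # over budget: drop it
    buckets[source_ip] = (tokens - 1.0, now)
    return True

# A flood of rapid requests from one address gets throttled to roughly BURST.
print(sum(allow("198.51.100.7") for _ in range(100)))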

Think of it this way: Back in the 1920s there were a lot of rich and famous bank robbers. Banks finally beefed up their protection, and cops got better at identifying and arresting them. Robbers still hit banks, but they rarely get rich, and they almost always get caught, especially when they persist in robbing more banks. The same will happen to DoS attackers: The sooner we can identify them, the sooner they will disappear as the bothersome elements of society that they are.

Doomed security technology No. 9: Huge event logs

Computer security event monitoring and alerting is difficult. Every computer can easily generate tens of thousands of events on its own each day. Collect them into a centralized logging database and pretty soon you're talking petabytes of needed storage. Today's event log management systems are often lauded for the vast size of their disk storage arrays.

The only problem: This sort of event logging doesn't work. When nearly every collected event is worthless and goes unread, and the cumulative effect of all those worthless, unread events is a huge storage bill, something has to give. Soon enough, admins will require application and operating system vendors to give them more signal and less noise by passing along useful events without the mundane log clutter. In other words, event log vendors will soon be bragging about how little space they take rather than how much.
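Here's a rough sketch of what that source-side filtering might look like: forward only warning-level events plus a short list of high-value event types, and drop the heartbeat chatter. The event names and thresholds are hypothetical:

# Sketch of "more signal, less noise": filter events at the source and forward
# only what an analyst might actually read. Event names and thresholds here
# are hypothetical; real deployments tune them per application and regulation.
import logging

HIGH_VALUE_EVENTS = {"logon_failure", "privilege_escalation", "config_change"}

class SignalOnlyFilter(logging.Filter):
    def filter(self, record: logging.LogRecord) -> bool:
        interesting = getattr(record, "event_type", "") in HIGH_VALUE_EVENTS
        return record.levelno >= logging.WARNING or interesting

handler = logging.StreamHandler()  # stand-in for a forwarder to central storage
handler.addFilter(SignalOnlyFilter())
logger = logging.getLogger("endpoint")
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)

logger.info("routine heartbeat")                                            # dropped
logger.info("admin group changed", extra={"event_type": "config_change"})   # forwarded
logger.warning("disk nearly full")                                           # forwarded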

Doomed security technology No. 10: Anonymity tools (not to mention anonymity and privacy)

Lastly, any lingering vestige of anonymity and privacy will be completely wiped away. We really don't have either now. The best book I can recommend on the subject is Bruce Schneier's "Data and Goliath." A quick read will scare you to death if you didn't already realize how little privacy and anonymity you truly have.

Even hackers who think that hiding on Tor and other "darknets" gives them some semblance of anonymity should note how quickly the cops are arresting people doing bad things on those networks. Anonymous kingpin after anonymous kingpin ends up arrested, identified in court, and serving real jail sentences, with real inmate numbers attached to their real identities.

The truth is, anonymity tools don't work. Many companies, and certainly law enforcement, already know who you are. The only difference is that, in the future, everyone will know the score and stop pretending they are staying hidden and anonymous online.

I would love for a consumer's bill of rights guaranteeing privacy to be created and passed, but past experience teaches me that too many citizens are more than willing to give up their right to privacy in return for supposed protection. How do I know? Because it's already the standard everywhere but the Internet. You can bet the Internet is next.
