Friday, 14 March 2014

Can anti-virus technology morph into breach detection systems?

Such breach detection systems would need a centralized management reporting system and cloud-based analysis of gathered threat data.

Anti-virus software is still often considered a "checkbox" item for enterprise deployments, especially on Microsoft Windows, but over the decades it has evolved to do far more than signature-based virus blocking. Today, the question is whether the type of anti-malware product that grew out of virus checking can transform again to become part of a "breach detection system," or BDS.

“The premise of breach detection is things will get through all your defenses and you need to contain it as soon as possible,” says Randy Abrams, research director at NSS Labs, which has begun testing what it calls BDS products. These products can identify evidence of stealthy cyberattacks, track down which corporate computers and networks were hit, and quickly mitigate any malware dropped in the attack to spy on the victim and exfiltrate sensitive data. BDS products, whatever their approach (sandboxing, an endpoint agent or something else), should be able to catch a breach within 48 hours at most, he says.
The premise of breach detection is things will get through all your defenses and you need to contain it as soon as possible.
— Randy Abrams, research director at NSS Labs

BDS products are largely immature, Abrams acknowledges, but enterprise customers are keenly interested in them and asking to have them independently tested. NSS Labs started doing that last year with products from AhnLab, FireEye and Fidelis Security, which was acquired by General Dynamics. These three did fairly well in that first round of basic testing, Abrams says, but the main limitation was the need for more protocol analysis to ensure attackers don't have “a hidden tunnel out of the enterprise,” he adds. The next round of BDS tests, anticipated for later this year, will be tougher, he says.

+More on Network World: McAfee plans enterprise security package for fast threat detection and response | Is rapid detection the new prevention? | IDC tabs ‘Specialized Threat Analysis & Protection’ as new segment +

The vendors that NSS Labs considers to be part of the emerging BDS market today include Cisco, FireEye, Symantec, McAfee, Palo Alto Networks, Damballa, Fidelis and AhnLab. The security industry itself is abuzz with talk of “indicators of compromise,” the “IOC” clues such as anomalous outbound traffic that might indicate an attacker successfully broke in. Abrams thinks any BDS will need a centralized management reporting system and probably a lot of cloud-based analysis of gathered threat data.
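As a rough sketch of what an anomalous-outbound-traffic IOC check can look like (the host names, byte counts and threshold here are invented for illustration; real BDS products work from far richer telemetry), a median-based outlier test flags the machine whose upload volume dwarfs its peers:

```python
from statistics import median

def flag_anomalous_hosts(outbound_bytes, mad_multiplier=5.0):
    """Flag hosts whose outbound volume sits far above the fleet's baseline.

    outbound_bytes maps host -> bytes sent in the observation window.
    Uses the median absolute deviation (MAD), which stays robust to the
    very outliers we are trying to catch; the multiplier is a tunable
    sensitivity threshold.
    """
    volumes = list(outbound_bytes.values())
    med = median(volumes)
    mad = median(abs(v - med) for v in volumes)
    cutoff = med + mad_multiplier * mad
    return sorted(host for host, v in outbound_bytes.items() if v > cutoff)

# One workstation suddenly uploads roughly 90x more than its peers --
# a classic exfiltration clue worth investigating.
traffic = {"ws-01": 120_000, "ws-02": 98_000, "ws-03": 110_000,
           "ws-04": 105_000, "ws-05": 9_500_000}
print(flag_anomalous_hosts(traffic))  # ['ws-05']
```

A single noisy counter like this proves nothing on its own; the point of a BDS is to correlate many such weak signals before declaring a breach.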

Where this will all go is uncertain. The term BDS isn’t universally applied as a description. One research firm, IDC, last year started tracking what it calls “Specialized Threat Analysis and Protection” as a new segment that seems similar to BDS.

The question is whether the established vendors in the traditional antivirus industry, particularly market-share leaders Symantec and McAfee, can transition to anything close to NSS Labs' view of BDS. Abrams notes that the problem with any anti-malware product, however good, is that criminals determined to break into corporations test their attack and espionage code against existing antivirus products to find something that will get through and go unnoticed, at least for a while.

Best CCNA Training and CCNA Certification and more Cisco exams log in to Certkingdom.com


Monday, 10 March 2014

The greatest security story never told -- how Microsoft's SDL saved Windows

'We actually had to bus in engineers.'

Microsoft has launched a new website to "tell the untold story" of something it believes changed the history of Windows security and indeed Microsoft itself: the Security Development Lifecycle, or plain 'SDL' for short.

For those who have never heard of the SDL, or don't have the remotest idea why it might be important, the new site offers some refreshingly candid insights to change their minds.

Without buying into the hype, the SDL can still fairly be described as the single initiative that saved Redmond's bacon at a moment of huge uncertainty in 2002 and 2003. Featuring video interviews with some of its instigators and protagonists, the new site offers outsiders a summary of how and why Microsoft decided to stop being a software firm and become a software and security firm in order to battle the malware that was suddenly smashing into its software.

Few outside the firm knew of the crisis unfolding inside its campus but not everyone was surprised. Microsoft now traces the moment the penny dropped to the early hours of a summer morning in 2001, only weeks before it was due to launch Windows XP to OEMs.

"It was 2 a.m. on Saturday, July 13, 2001, when Microsoft's then head of security response, Steve Lipner, awoke to a call from cybersecurity specialist Russ Cooper. Lipner was told a nasty piece of malware called "Code Red" was spreading at an astonishing rate. Code Red was a worm, a malicious computer program that spreads quickly by copying itself to other computers across the Internet. And it was vicious."

Others arrived in the following two years: the Blaster worm, Nimda, Code Red II, MyDoom, Sasser, and on and on. To a world and a Microsoft not used to the notion of malware being a regular occurrence, this was all a big shock.

By January 2002, with attacks on its baby XP humbling the biggest software firm on earth, Bill Gates sent his famous Trustworthy Computing (TwC) memo to everyone at Microsoft. From now on, security was going to be at the root of everything and so help us God.

That turned into the SDL, and it was given priority one to the extent that it took over the whole 8,500-person Windows development team for much of that year and the next. Its ambition was to completely change the way Microsoft made software so that fewer programming errors had to be fixed after customers were involved; "security could not continue to be a retroactive exercise."

Users had also started complaining. Loudly.

"I remember at one point our local telephone network struggled to keep up with the volume of calls we were getting. We actually had to bus in engineers," the site quotes its security VP Matt Thomlinson as saying.

The fruit of the SDL was XP's first Service Pack in 2002, followed by the even more fundamental security overhaul of SP2 in 2004. By then, XP had been equipped with a software firewall, an almost unthinkable feature for an OS three years earlier.

It's arguable that despite the undoubted gains of the SDL since then, the firm has yet to fully recover from the trauma of the period. Windows development has seemed less and less certain ever since, following up XP with the flawed Vista and the more recent Windows 8 near-debacle. Microsoft still does operating systems but it's not clear that all its users do.

Still, the SDL programme has proved hugely influential even if it's not well known outside tech circles. It is now baked into everything Microsoft does, and it has influenced many other software houses, many of which have modelled their own versions on Microsoft's published framework for secure development.

Whatever mis-steps Microsoft has made in the last decade, security has turned into a bit of a success story right down to the firm's pioneering and hugely important Digital Crimes Unit (DCU) that conducts the forensics necessary to track down the people who write malware in their caves. Both the SDL and DCU are seen as world leaders.

So let's hear it for Redmond, the software giant that launched an operating system years behind the criminals but somehow clawed itself back from disaster. Most other firms would have wilted but somehow Gates's memo rallied the cubicle army.


Best Microsoft MCTS Certification, Microsoft MCITP Training at certkingdom.com

Saturday, 1 March 2014

Twitter Suffering From Growing Pains (and Facebook Comparisons)

Twitter faces growing pressure to attract new users and dramatically increase engagement on the platform. Can it ever rival the numbers and growth of Facebook?

Twitter's honeymoon as a publicly traded company could be coming to an end. With growth stalling and timeline views on the decline for the first time ever, Twitter finds itself at a crossroads.

Twitter Suffers from Growing Pains
While its quest for more ad revenue continues unabated, the company faces even greater pressure to attract new users and dramatically increase engagement on the platform.

"Twitter has seen its sequential MAU growth rate decelerate sharply after hitting 50 million, raising concerns that its quirkier nature might cap its potential audience in the U.S. at a ceiling well below that of Facebook."
-- Seth Shafer, SNL Kagan

"We as a company aren't going to be satisfied -- I am not going to be satisfied -- until we reach every connected person on the planet, period," CEO Dick Costolo said at last week's Goldman Sachs Technology and Internet Conference.

The challenge ahead for Twitter coupled with Costolo's grandiose goal puts the company in a predicament unlike any it has confronted before. It also fans the unfortunate, yet inevitable comparison to Facebook. While Twitter ended 2013 with an average monthly active user (MAU) base of 241 million, Facebook surpassed 1.23 billion. For every user that engages on Twitter, at least five are actively using Facebook.

"Twitter needs to do something to grow to the size of Facebook but the jury is still out if there's a clear path for Twitter or any other company to do that, or if Facebook is a once-in-a-lifetime anomaly that was in the right place at the right time," says Seth Shafer, associate analyst at SNL Kagan.

Costolo hasn't helped matters by failing to meet previous internal growth estimates either. Early last year the executive reportedly told employees that he expected to reach 400 million MAUs by the end of 2013. Far from doubling its active user base, Twitter instead reported a 30 percent increase.

"Twitter's overall MAU growth is still pretty healthy, but it's all coming internationally where users monetize at a much lower rate. U.S. growth has slowed significantly at about 50 million MAUs," says Shafer.

Facebook blew past 50 million U.S. MAUs without blinking and moreover, its sequential increases didn't dip into the single digits until it surpassed about 120 million users in the U.S., according to SNL Kagan data.

"Twitter, however, has seen its own sequential MAU growth rate decelerate sharply after hitting 50 million MAUs, raising concerns that its quirkier nature and niche focus might cap its potential audience in the United States at a ceiling well below that of Facebook," Shafer adds.

Twitter's 'Road Map' for Growth
Nonetheless, Twitter's lead executive says he is optimistic about rising user growth. While the company is being careful not to make specific promises or announcements about how it will improve on these points, Costolo has frequently referenced a road map of late that lays out a strategy for achieving better growth over the course of the year.

Pointing to field research and internal data on how users engage with the platform, he hints at a series of new features and design changes that are expected to drive new user growth. Twitter's vault of data and newfound capability to experiment with multiple beta tests simultaneously has "informed a very specific road map for the kinds of capabilities we want to introduce to the product that we believe will drive user growth," says Costolo.

He is quick to point out, however, that no single product feature or change to the platform will lead to a "quantum leap change in growth." Instead it will be an accumulation of numerous tweaks throughout the year that give him confidence. "You're going to be seeing a significant amount of experimentation of different ideas we have," he says.

While dispelling concerns about lagging growth in the recently closed quarter, Costolo says there was no specific event or trend during the quarter that meaningfully affects how the company thinks about user growth. Indeed, improvements made during the final months of 2013, particularly in messaging and discovery, have already paid off. Favorites and retweets rose 35 percent from the previous quarter and direct messages jumped 25 percent over the same period, according to Twitter.

"I'm starting to see those interactions do what we hoped they would do," he says. "It's more about pushing the content forward and pushing back the scaffolding of Twitter."

The company also hopes to attract new users by simplifying its on-boarding process and dramatically reducing the 11 steps a new account currently requires.

Under the Shadow of Facebook
Twitter has successfully maneuvered through its fair share of challenges before. Be it the fail whale sightings and power struggles of its early days or the feverish hunt for ad revenue of late, the company has found its way.

But now with its first complete quarter as a public company in the rear view, the demands for growth from investors will only get louder with each passing quarter. Twitter will have to deliver some big numbers in 2014 to keep Wall Street happy, but Costolo's comments also suggest that much of that success will depend on a clear differentiation between Twitter's role in the world and that of Facebook.

"Twitter is this indispensable companion to life in the moment," Costolo says. "If you think about it as a product, I think that misses the impact and the reach of what we really believe is a content, communications and data platform."

By that distinction, the opportunities afforded to Twitter are "enormous," says Costolo. "We believe we are the only platform where you get an understanding of wide reach in the moment while it's happening."

Tapping into big data and personalization could help, but it won't move the needle far enough for Twitter to reach the scale of Facebook, says Shafer of SNL Kagan.

Emerging from under the shadow of Facebook will be a struggle for Twitter unless it makes dramatic changes to the service or goes on an acquisition binge aimed at cobbling something larger together, he adds. And even that would be a challenge because of course, "we already have a pretty big thing like that called Facebook," Shafer says.


Wednesday, 19 February 2014

Intel lays down case for software focus

Hardware needs software more than ever, CEO Brian Krzanich said
For decades, Intel chips would be unboxed and put straight into computers. But the chip maker is now trying to tie software closer to hardware before it starts producing chips, said CEO Brian Krzanich on Wednesday.

"For companies like Intel that are for the most part hardware companies, we tend to use software as a driver for hardware, and we tend to think of software as helping drive [the] need for hardware," Krzanich said in a chat session on Reddit.

In driving his point home, Krzanich invoked former Intel CEO Andy Grove. Grove said that software and hardware were complementary, and "drove each other," Krzanich wrote.

Intel's software focus has grown in recent years, most visibly when it appointed Renee James as president in May, a promotion from her previous post as executive vice president and general manager of the software and services group. James and Krzanich work as a team to make decisions for the company.

In recent years Intel has also made many software acquisitions, including McAfee. Intel intends to push the acquired software into mobile devices and PCs.

Intel is tuning McAfee -- now renamed Intel Security -- software to take advantage of security features on its chips. Intel also acquired software companies like Wind River, whose real-time operating system is considered key for its supercomputing and low-power "Internet of things" chips to securely collect and quickly process data.

Intel has also released its own Hadoop distribution designed to work best with its server chips.

"We now spend a huge amount of time upfront thinking about the experiences we want a user to have before we put one transistor on the chip," Krzanich said.

That is how Intel has changed its approach in chip development -- first defining what a product is going to be used for, developing the software and tools around it and then tuning the chip to meet that user experience, Krzanich said.

"It's about experience, and without a great user experience from the un-boxing onwards...you don't have a product," Krzanich said.

Krzanich also touched upon the company's relationship with Apple.

"We've always had a very close relationship with Apple and it continues to grow closer," Krzanich wrote. "We're always trying to build the relationship with all of our customers to be closer."

Apple uses Intel's chips in PCs, but uses its own ARM-based processors in the iPhone, iPad and other devices. After sticking to making mainly x86 chips in its factories for decades, Intel opened up to making ARM-based processors earlier this year, and will be making 64-bit ARM chips for Altera, which makes FPGAs (field-programmable gate arrays).

Analyst firm IC Insights last week sent a note saying that Intel should cut a deal with Apple to make 64-bit chips on the 14-nanometer process, which is considered the industry's most advanced manufacturing technology.

Krzanich also said he hopes 40 million tablets with Intel chips will ship this year.



Monday, 17 February 2014

So Long IT Specialist, Hello Full-Stack Engineer

At GE Capital, the business is focused not simply on providing financial services to mid-market companies but also on selling the company's industrial expertise. It might help franchisees figure out how to reduce power consumption or aid aircraft companies with their operational problems. "It's our big differentiator," says GE Capital CTO Eric Reed. "It makes us sticky."

And within IT, Reed is looking not for the best Java programmer in the world or an ace C# developer. He wants IT professionals who know about the network and DevOps, business logic and user experience, coding and APIs.

IT Specialist Out, Full-Stack Engineer In
It's a shift for the IT group prompted by an exponential increase in the pace of business and technology change. "The market is changing so much faster than it was just two or five or, certainly, 10 years ago," says Reed. "That changes the way we think about delivering solutions to the business and how we invest in the near- and long-term. We have to think about how we move quickly. How we try things and iterate fast."

But agility is a tall order when supporting a $44.1 billion company with more than 60,000 employees in 65 countries around the world. "There are several markets we play in, and we can't be big and slow," says Reed. "The question is how do we make ourselves agile as a company our size."

Like many traditional IT organizations, GE Capital had one group that developed and managed applications and another that designed and managed infrastructure. Over time, both groups had done a great deal of outsourcing. It wasn't an organizational structure designed for speed.

An engineer by training, Reed saw an opportunity to apply the new product introduction (NPI) process developed at GE a couple of decades ago to the world of IT development. Years ago, a GE engineer might split his or her time between supporting a plant, providing customer service, and developing a new product. "With NPI, we turned that on its ear and said you're going to focus only on this new product," explains Reed. "You take people with different areas of expertise and you give them one focus."

That's what Reed did with IT. "We take folks that might do five different things in the course of the day and focus them on one task -- with the added twist being that you can't be someone who just writes code," says Reed.

A New Type of IT Team Forms
Last year, Reed pulled together the first such team to develop a mobile fleet management system for GE Capital's Nordic region. He assembled a diverse group of 20, who had previously specialized in networking, computing, storage, applications, or middleware, to work together virtually. He convinced all of the company's CIOs to share their employees. The team members remained in their original locations with their existing reporting relationships, but for six months all of their other duties were stripped away. "The CIOs had to get their heads around that," Reed says.

The team was given some quick training in automation and given three tasks: develop the application quickly, figure out how to automate the infrastructure, and figure out how to automate more of the application deployment and testing in order to marry DevOps with continuous application delivery.

There were no rules -- or roles. "We threw them together and said, 'You figure it out,'" Reed recalls. "We found some people knew a lot more than their roles indicated, and the lines began blurring between responsibilities." Some folks were strong in certain areas and shared their expertise with others. Traditional infrastructure professionals had some middleware and coding understanding. "They didn't have to be experts in everything, but they had a working knowledge," Reed says.

The biggest challenge was learning to be comfortable with making mistakes. "GE has built a reputation around execution," says Reed. "My boss [global CIO of GE Capital] and I had to figure out how to foster an environment where people take risks even though it might not work out."

Project Success
The project not only proceeded quickly -- the application was delivered within several months -- it also established some new IT processes. The team increased the amount of automation possible not only at the infrastructure level but within the application layer as well. They also aimed for 60 to 70 percent reusability in developing the application, creating "Lego-like" building blocks that can be recycled for future projects.

Business customers welcomed the new approach. In the past, "they would shoehorn as many requirements into the initial spec as possible because they didn't know when they'd ever have the chance again," says Reed. "Now it's a more agile process." The team launches a minimum viable solution and delivers new features over time.

For IT, "it was a radical change in thinking," says Reed. "We've operated the same way literally for decades. There were moments of sheer terror." And it wasn't for everyone. Some opted out of the project and went back to their day jobs.

But Reed is eager to apply the process to future projects and rethink the way some legacy systems are built and managed. "We had talked about service-oriented architecture, and now we have something tangible that shows it can be done," Reed says. "On the legacy side, we have to decide if we want to automate more of that infrastructure and keep application development the old way or invest in this."

Some employees remained with the fleet management app team. Others started a new project. And a few went back to their original roles. "We're trying to make disciples so more people can learn about this process," Reed says.

Reed can envision the IT organization changing eventually. "What we look for in people when we hire them will change. There were years when we went out in search of very technical people. Then there were years of outsourcing where we sought people who could manage vendors and projects," Reed says. "Now we need both, and we need to figure out how to keep them incentivized."



Monday, 20 January 2014

Cisco fixes remote access vulnerabilities in Cisco Secure Access Control System

Flaws in the network access control product can give attackers access to administrative functions, Cisco said

Cisco Systems has released software updates for its Cisco Secure Access Control System (ACS) in order to patch three vulnerabilities that could give remote attackers administrative access to the platform and allow them to execute OS-level commands without authorization.

Cisco ACS is a server appliance that enforces access control policies for both wireless and wired network clients. It's managed through a Web-based user interface and supports the RADIUS (Remote Authentication Dial-In User Service) and TACACS+ (Terminal Access Controller Access-Control System Plus) protocols.

+ Also on NetworkWorld: Best of CES 2014 -- in Pictures +

Versions of the Cisco Secure ACS software older than 5.5 contain two vulnerabilities in the RMI (Remote Method Invocation) interface that's used for communication between different ACS deployments and listens on TCP ports 2020 and 2030.

One of the vulnerabilities, identified as CVE-2014-0648, stems from insufficient authentication and authorization enforcement and allows remote unauthenticated attackers to perform administrative actions on the system through the RMI interface.

The other vulnerability, identified as CVE-2014-0649, allows remote attackers with access to restricted user accounts to escalate their privileges and perform superadmin functions via the RMI interface.

A third vulnerability, tracked as CVE-2014-0650, was discovered in the system's Web-based interface and is the result of insufficient input validation. An unauthenticated remote attacker can exploit this vulnerability to inject and execute OS-level commands without shell access, Cisco said in a security advisory. This vulnerability affects Cisco Secure ACS software older than 5.4 patch 3.
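Putting the advisory's affected ranges together -- the RMI flaws (CVE-2014-0648, CVE-2014-0649) affect releases older than 5.5, and the command-injection flaw (CVE-2014-0650) affects releases older than 5.4 patch 3 -- triage reduces to two version comparisons. The version-string format below is an assumption for illustration, not Cisco's official versioning scheme:

```python
def parse_acs_version(text):
    """Parse a version string like '5.3' or '5.4 patch 2' into a sortable
    (major, minor, patch) tuple. The input format is an assumption made
    here for illustration."""
    parts = text.lower().replace("patch", " ").split()
    major, minor = (int(x) for x in parts[0].split("."))
    patch = int(parts[1]) if len(parts) > 1 else 0
    return (major, minor, patch)

def applicable_cves(version_text):
    """Return the CVEs from the advisory that apply to the given release."""
    v = parse_acs_version(version_text)
    cves = []
    if v < (5, 5, 0):   # RMI flaws: fixed in 5.5
        cves += ["CVE-2014-0648", "CVE-2014-0649"]
    if v < (5, 4, 3):   # web-interface command injection: fixed in 5.4 patch 3
        cves.append("CVE-2014-0650")
    return cves

print(applicable_cves("5.4 patch 3"))  # ['CVE-2014-0648', 'CVE-2014-0649']
```

Note how a 5.4 patch 3 deployment escapes the command-injection flaw but remains exposed to both RMI issues, which is why upgrading to 5.5 is the only complete remediation.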

There are no configuration workarounds available to mitigate these vulnerabilities, so updating the software to the new versions released by Cisco is recommended.

"The Cisco Product Security Incident Response Team (PSIRT) is not aware of any public announcements or malicious use of the vulnerabilities that are described in this advisory," the company said.



Monday, 13 January 2014

70-337 Enterprise Voice & Online Services with Microsoft Lync Server 2013



QUESTION 1
Subsequent to configuring synchronization for Active Directory and Microsoft Office 365, you want
to make sure that the phone numbers of the preliminary Exchange Online users are suitably supervised.
Which of the following actions should you take?

A. You should consider making use of the MOSDAL Support Toolkit.
B. You should consider making use of the Office 365 Lync Online TRIPP tool.
C. You should consider making use of Active Directory Users and Computers.
D. You should consider making use of the Microsoft Online Services Directory Synchronization tool.

Answer: C

Explanation:


QUESTION 2
You are preparing to install and configure two Mediation Servers in the Dallas office to satisfy the
Enterprise Voice prerequisites.
You are preparing to configure the necessary ports.
Which of the following is TRUE with regards to the port configuration?

A. At least one port must be configured for each Mediation Server to meet the Enterprise Voice prerequisites.
B. Only one port must be configured for each Mediation Server to meet the Enterprise Voice prerequisites.
C. At least three ports must be configured for each Mediation Server to meet the Enterprise Voice prerequisites.
D. Only one Mediation Server must be configured with three ports to meet the Enterprise Voice prerequisites.

Answer: C

Explanation:


QUESTION 3
You have been instructed to execute the cmdlet that satisfies the corporate prerequisites.
Which of the following actions should you take?

A. You should consider executing the Enable-CsUser cmdlet.
B. You should consider executing the Set-CsUser cmdlet.
C. You should consider executing the Convert-CsUserData cmdlet.
D. You should consider executing the Export-CsUserData cmdlet.

Answer: B

Explanation:


QUESTION 4
You have been tasked with satisfying the UM prerequisite with regards to extensions.
Which of the following actions should you take?

A. You should consider configuring a UM auto attendant.
B. You should consider configuring a UM mailbox policy.
C. You should consider configuring a hunt group.
D. You should consider configuring a UM Dial plan.

Answer: D

Explanation:


QUESTION 5
You want to satisfy the Lync Server prerequisites with regards to PSTN routing.
Which of the following actions should you take?

A. You should consider configuring a Director Server.
B. You should consider configuring two Lync Server 2013 trunks.
C. You should consider configuring a stand-alone Mediation Server pool.
D. You should consider configuring an Edge pool.

Answer: B

Explanation:

