Monday, 25 November 2013

'Team Moscow' wins $100K in PayPal's Battle Hack 2013

PayPal's contest challenged teams from around the world to create the best app to help their community

A group of Russian software developers dubbed "Team Moscow" has won PayPal's Battle Hack 2013 and its $100,000 prize, awarded for the best socially worthy use of PayPal's API. A team from Israel finished second, and one from Miami finished third.

Team Moscow produced a “Donate Now” application that leverages Bluetooth Low Energy (BLE) technology to let anyone instantly donate to a cause right from their mobile device without filling in lengthy forms. Team Tel Aviv built an app that connects runners to encourage running, and Team Miami built LoanPal, a peer-to-peer lending service for “underbanked individuals.”

Background: PayPal’s Battle Hack competition wants cool social apps

Team Moscow members include Sergey Pronin, Alexander Balabna, Bayram Annakov, and Oksana Tretiakova, who are sharing the $100,000 prize. Pronin responded to questions via e-mail:

What kind of background do you have in application development?
I have a bachelor’s degree in Software Engineering from National Research University Higher School of Economics (HSE). My teammates and I work for a Russian software company called Empatika, where I’m a senior developer working primarily on an app called App in the Air. I love programming and developing applications, and all of us enjoy participating in the internal hackathons our company hosts. For example, at the company’s last hackathon a few months ago we worked with an Arduino for the first time.

I use Objective-C and Python on a daily basis, and Java and web technologies from time to time. I'm also currently in the second year of a master's degree program in software engineering.

What does your winning PayPal application do?
The project consists of two main parts: the beacon, an Arduino-based BLE beacon with a Rainbowduino screen, and the client app. The idea is to help people donate by simplifying the actual donation process and making donations more contextual. For our presentation we used "Food for Homeless" as an example, a bus that serves food to homeless people. Typically, the only way to donate is to fill in 23 fields in a form on their website, a problem that is even worse on a smartphone. If you are on foot and see the bus, you don't have much time to fill in all the forms; our app addresses this problem.
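For a sense of how the beacon side can work: iBeacon-style BLE advertisements pack a 16-byte UUID plus two small integers into a fixed 30-byte payload, and a client app can map those integers to a cause. Here is a minimal sketch in Python; the UUID and the cause/campaign field meanings are hypothetical, not Team Moscow's actual scheme.

```python
import struct
import uuid

def build_ibeacon_payload(proximity_uuid: uuid.UUID, major: int, minor: int,
                          tx_power: int = -59) -> bytes:
    """Build the 30-byte advertisement used by iBeacon-style BLE beacons.
    A donation client could map (major, minor) to a specific cause,
    e.g. major = charity ID, minor = campaign ID (hypothetical mapping)."""
    # Flags AD structure: length 2, type 0x01, value 0x06 (general discoverable)
    flags = bytes([0x02, 0x01, 0x06])
    # Manufacturer-specific AD structure:
    #   length 0x1A, type 0xFF, Apple company ID 0x004C (little-endian),
    #   iBeacon type 0x02, data length 0x15, then UUID + major + minor + power
    body = struct.pack(">16sHHb", proximity_uuid.bytes, major, minor, tx_power)
    mfg = bytes([0x1A, 0xFF, 0x4C, 0x00, 0x02, 0x15]) + body
    return flags + mfg

payload = build_ibeacon_payload(
    uuid.UUID("e2c56db5-dffb-48d2-b060-d0f5a71096e0"), major=1, minor=42)
assert len(payload) == 30  # standard iBeacon advertisement length
```

A phone that hears this advertisement can look up the major/minor pair against a registry of causes and present a one-tap donation screen, which is the contextual, form-free flow Pronin describes.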

Any other comments about your experience in the PayPal contest are welcome.

It is not our first hackathon, but I can surely say that it was the best experience. PayPal has done a really great job — the environment and facilities can't be beat.



Best Microsoft MCTS Certification, Microsoft MCITP Training at certkingdom.com

Wednesday, 20 November 2013

10 mistakes companies make after a data breach

Michael Bruemmer, vice president of Experian Data Breach Resolution, outlined some of the common mistakes his firm has seen as organizations deal with the aftermath of a breach, during a presentation for the International Association of Privacy Professionals (IAPP) Privacy Academy.

How to weather the storm
The aftermath of a data breach, such as the one experienced last month by Adobe, can be chaotic if not dealt with properly. The result of such poor handling could see organizations facing a hit to reputation, or worse, financial and legal problems.

No external agencies secured
Sometimes a breach is too big to deal with in-house, and the type of breach may make that option an unwise one. So it's best to have external help available if needed. Incident Response teams, such as those offered by Verizon Business, Experian, Trustwave, or IBM (just to name a few), should at least be evaluated and considered when forming a business continuity / incident response plan.

"The process of selecting the right partner can take time as there are different levels of service and various solutions to consider," Bruemmer said. Not having a forensic expert or resolution agency already identified in advance is the mistake to avoid.

No engagement with outside counsel
"Enlisting an outside attorney is highly recommended," Bruemmer said.

"No single federal law or regulation governs the security of all types of sensitive personal information. As a result, determining which federal law, regulation or guidance is applicable depends, in part, on the entity or sector that collected the information and the type of information collected and regulated."

So unless internal resources are knowledgeable about all current laws and regulations, external legal counsel with expertise in data breaches is a wise investment.

No single decision maker
"While there are several parties within an organization that should be on a data breach response team, every team needs a leader," Bruemmer said.

There needs to be one person who drives the response plan and acts as the single point of contact for all external parties. That person is also in charge of the internal reporting structure, ensuring that everyone from executives to individual response team members is kept updated.

Lack of clear communication
Related to the lack of a single decision maker is a lack of clear communication. Miscommunication can be the key driver in mishandling a data breach, Bruemmer said, as it delays the process and adds confusion.

"Once the incident response team is identified, identify clear delegation of authority, and then provide attorneys and [external parties] with one main contact."

No communications plan
Sticking with the communications theme, another issue organizations face is a lack of planning for communications with the public, especially the media.

"Companies should have a well-documented and tested communications plan in the event of a breach, which includes draft statements and other materials to activate quickly. Failure to integrate communications into overall planning typically means delayed responses to media and likely more critical coverage," Bruemmer explained.

Waiting for perfect information before acting
Dealing with the aftermath of a data breach often requires operating with incomplete or rapidly changing information, due to new information learned by internal or external security forensics teams.

"Companies need to begin the process of managing a breach once an intrusion is confirmed and start the process of managing the incident early. Waiting for perfect information could ultimately lead to condensed timeframes that make it difficult to meet all of the many notification and other requirements," Bruemmer said.

Micromanaging the Breach
"Breach resolution requires team support, and often companies fail when micromanaging occurs. Trust your outside counsel and breach resolution vendors, and hold them accountable to execute the incident response plan," Bruemmer said.

No remediation plans post incident
There should be plans in place that address how to engage with customers and other audiences once the breach is resolved, as well as the establishment of additional measures to prevent future incidents.

"If an organization makes additional investments in processes, people and technology to more effectively secure the data, finding ways to share those efforts with stakeholders can help rebuild reputation and trust. Yet many fail to take advantage of this longer-term need once the initial shock of the incident is over," Bruemmer said.

Not providing a remedy to consumers
Customers should be put at the center of decision making following a breach. This focus means providing some sort of remedy, including call centers where consumers can voice their concerns and credit monitoring if financial, health or other highly sensitive information is lost.

"Even in incidents that involve less sensitive information, companies should consider other actions or guidance that can be provided to consumers to protect themselves," Bruemmer said.

Failing to practice
"Above all, a plan needs to be practiced with the full team. An incident response plan is a living, breathing document that needs to be continually updated and revised. By conducting a tabletop exercise on a regular basis, teams can work out any hiccups before it's too late," Bruemmer said.


12 hot security start-ups to watch

These start-ups are focusing on security in cloud services and mobile devices

Going into 2014, a whirlwind of security start-ups are looking to have an impact on the enterprise world. Most of these new ventures are focused on securing data in the cloud and on mobile devices. Santa Clara, Calif.-based Illumio, for example, founded earlier this year, is only hinting at what it will be doing in cloud security. But it is already the darling of Silicon Valley investors, pulling in over $42 million from backers Andreessen Horowitz, General Catalyst, Formation 8 and others.

The cloud’s lure is easy to see. More businesses continue to adopt a wide range of cloud services, whether software-as-a-service, infrastructure-as-a-service or platform-as-a-service. That means the enterprise IT department needs more visibility, monitoring and security controls over what employees are doing, plus evidence that their data is safe. In addition, employees today increasingly use smartphones and tablets they personally own for work in “Bring Your Own Device” mode, raising further management and security questions. Where there are perceived security “gaps,” start-ups see opportunities, as the 12 firms we identify here do.

Security is increasingly delivered not as on premises software or hardware but at least partly if not wholly as a cloud-based service. Gartner is predicting security-as-a-service will grow from about $2.13 billion now to $3.17 billion in 2015.
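As a rough sanity check, those Gartner figures imply annual growth of about 22%, assuming "now" means 2013 and therefore two years of growth to 2015:

```python
# Implied compound annual growth rate of Gartner's security-as-a-service
# forecast: $2.13B in 2013 to $3.17B in 2015, i.e. two years of growth.
start, end, years = 2.13, 3.17, 2
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # about 22% per year
```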

Gartner: Cloud-based security as a service set to take off

With all of that in mind, here’s our slate of security start-ups worth watching in the near future:

Adallom is based in Menlo Park, Calif., but has its research and development roots in Israel, where its three co-founders, CEO Assaf Rappaport, vice president of R&D Roy Reznik and CTO Ami Luttwak, have backgrounds in the Israeli cyber-defense forces. Adallom — a word which means “last line of defense” in Hebrew — is taking on the problem of monitoring user actions related to software-as-a-service (SaaS) usage. The firm’s proxy-based technology, announced this month, is offered to the enterprise either as a security service in the cloud or as server-based software for on-premises deployment.

The goal is to provide real-time analysis and a clear audit trail and reporting related to SaaS-based application usage by the enterprise. The monitoring allows options for automatically or manually terminating sessions or blocking content downloads. Though not wholly similar, its closest competitors could be considered two other start-ups, SkyHigh Networks and Netskope. The venture has gotten $4.5 million in funding from Sequoia Capital.
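The kind of per-session control described above can be pictured as a policy function the proxy evaluates for each event. Here is a toy sketch, with hypothetical event fields and thresholds rather than Adallom's actual rules:

```python
from dataclasses import dataclass

@dataclass
class SaaSEvent:
    """One user action observed by the proxy (fields are hypothetical)."""
    user: str
    app: str          # e.g. "salesforce", "dropbox"
    action: str       # e.g. "login", "download"
    bytes_out: int = 0

def decide(event: SaaSEvent, download_limit: int = 50_000_000) -> str:
    """Toy policy: block oversized downloads, allow everything else.
    Real proxy-based products make far richer, context-aware decisions."""
    if event.action == "download" and event.bytes_out > download_limit:
        return "block"
    return "allow"

assert decide(SaaSEvent("alice", "dropbox", "download", 80_000_000)) == "block"
assert decide(SaaSEvent("bob", "salesforce", "login")) == "allow"
```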

AlephCloud hasn’t yet made its software and service, called AlephCloud Content Canopy, generally available, but its purpose is to provide controlled encryption and decryption of documents transmitted business-to-business via cloud-based file synchronization and sharing services such as Dropbox, SkyDrive and Amazon S3. The company was founded in 2011 by CEO Jieming Zhu and CTO Roy D’Souza. Zhu says Content Canopy works by means of the “federated key management” process AlephCloud developed, which can use the existing enterprise public-key infrastructures used in identity management. For the end user who is permitted to retrieve and decrypt the encrypted document via Dropbox or SkyDrive, though, it’s all transparent. AlephCloud says its “zero-knowledge” encryption process means the company never holds the private encryption key. AlephCloud will first support PCs, Macs, Apple iOS devices and specific file-sharing services, with Android support coming next year. Zhu says the underlying technology can be expanded to other applications as well. AlephCloud has received $9.5 million in venture-capital funding, including $7.5 million from Handbag LLC and the remainder from angel investors.
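The "zero-knowledge" claim rests on the service never holding a usable private key. The simplest way to see how that is possible is a 2-of-2 XOR key split, in which each party holds a share that is individually worthless. AlephCloud's federated key management is considerably more sophisticated, so treat this purely as an illustration of the principle:

```python
import secrets

def split_key(key: bytes) -> tuple:
    """Split a key into two shares; neither share alone reveals anything
    about the key, but XORing the two shares together recovers it."""
    share1 = secrets.token_bytes(len(key))
    share2 = bytes(a ^ b for a, b in zip(key, share1))
    return share1, share2

def combine(share1: bytes, share2: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(share1, share2))

key = secrets.token_bytes(32)       # e.g. an AES-256 content key
s1, s2 = split_key(key)             # one share could live with a service
assert combine(s1, s2) == key       # both shares together recover the key
assert s1 != key and s2 != key      # either share alone is random noise
```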

BitSight Technologies has a simple proposition. It’s not uncommon for companies to want to evaluate the IT security of another business before entering into an e-commerce arrangement where networks may be interconnected in some way. BitSight, co-founded in 2011 by CTO Stephen Boyer and COO Nagarjuna Venna, has a security “rating” service to do this, though there are limits on how far it can go at this point. The BitSight approach, says vice president of marketing Sonali Shah, relies on analysis of Internet traffic by BitSight sensors to detect whether a company’s IT assets, such as computers, servers or networks, have been commandeered by threats such as botnets or denial-of-service attacks. But she acknowledges there’s not yet a way for BitSight to determine what security issues might arise in a company’s use of cloud services. Cambridge, Mass.-based BitSight has received $24 million in venture-capital funding from investors that include Menlo Ventures, Globespan Capital Partners, Commonwealth Capital and Flybridge Capital Partners.

Defense.net is focusing on stopping denial-of-service attacks aimed at both enterprises and cloud service providers. Founded by its CTO Barrett Lyon, who started another anti-distributed-denial-of-service firm, Prolexic, in 2003, Defense.net relies on a cloud service, with no appliance required, to mitigate large-scale DDoS assaults. Many in the industry say DDoS attacks are growing worse in scale and number. For his part, Lyon says he thinks the average DDoS attack is probably 16 times larger and “significantly more sophisticated than it was a year earlier.” Defense.net has received $9.5 million in funding from Bessemer Venture Partners.
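The basic building block of any DDoS defense is deciding, per client, which requests to let through. A classic token-bucket rate limiter sketches the idea; this is purely illustrative, as Defense.net's cloud-scale mitigation is far more involved:

```python
class TokenBucket:
    """Classic token-bucket rate limiter: tokens refill at `rate` per
    second up to `capacity`; each request spends one token, and requests
    beyond the budget are rejected."""
    def __init__(self, rate: float, capacity: float):
        self.rate, self.capacity = rate, capacity
        self.tokens = capacity
        self.last = 0.0

    def allow(self, now: float) -> bool:
        # Refill based on elapsed time, then try to spend one token.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=10, capacity=5)        # 10 req/s, burst of 5
burst = [bucket.allow(0.0) for _ in range(8)]    # 8 requests at t=0
assert burst == [True] * 5 + [False] * 3         # burst capped at 5
assert bucket.allow(1.0)                         # tokens refill over time
```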

Illumio, founded by its CEO Andrew Rubin earlier this year, is still in stealth mode, maintaining a discreet silence about its intentions. But the little hints sprinkled across its website indicate the Santa Clara, Calif.-based company’s focus is likely to be cloud-based security with an emphasis on virtualization. Illumio has brought in former VMware techies and execs. As for Rubin himself, he was formerly CEO at Cymtec Systems, a security firm providing the enterprise with visibility, protection and control over Web content and mobile devices, plus a means for intrusion-detection analysis. Illumio has received more than $42 million in funding from Andreessen Horowitz, General Catalyst, Formation 8 and others.

Lacoon Mobile Security has come up with a sandboxing approach to detect zero-day malware targeting Android and Apple iOS devices by means of a small lightweight agent that examines mobile applications through behavior analysis and a process tied to the Lacoon cloud gateway. The start-up was founded by CEO Michael Shaulov, vice president of research and development Ohad Bobrov, and Emanuel Avner, the CFO. The company has its R&D arm in Israel and its headquarters in San Francisco. It’s backed by $8 million in venture-capital funding led by Index Ventures, plus $2.7 million in angel investing, including from Shlomo Kramer, CEO at Imperva.

Malcovery Security, based in Pittsburgh, was basically spun out in 2012 from research on phishing done at the University of Alabama in Birmingham, according to its CTO Greg Coticchia. Targeted phishing attacks can have disastrous outcomes when devices are targeted to infiltrate organizations and steal data. Coticchia says the Malcovery technologies offered to businesses include ways to identify phishing websites and a service that can detect phishing e-mail. The company’s founders include Gary Warner, director of research in cyber forensics at the University of Alabama, and the start-up has received about $3 million in funding from the university.

Netskope wants to help businesses monitor how their employees are using cloud-based applications and apply security controls to that usage, such as giving IT managers the ability to block data transfers or receive alerts. The Netskope service can apply security controls to about 3,000 different cloud-based applications, whether they be SaaS, PaaS or IaaS. The Netskope service is meant to let IT divisions get a grip on cloud usage and avoid the “shadow IT” issue of business people initiating cloud services without informing IT at all. The Los Altos, Calif.-based start-up was founded in 2012 by CEO Sanjay Beri along with chief architect Ravi Ithal, chief scientist Krishna Narayanaswami, and Lebin Chang, head of application engineering teams, all of whom bring tech industry experience ranging from Juniper to Palo Alto Networks to VMware. Netskope has amassed $21 million in venture funding from Social+Capital Partnership and Lightspeed Venture Partners.

PrivateCore is a crypto-based security play focused on using the central processing unit (CPU) as the trusted component for encrypting data in use. PrivateCore has come up with what it calls vCage, software that relies on Intel Xeon Sandy Bridge-based servers for secure processing in cloud environments, first off in IaaS. The challenge in processing encrypted data is “the problem with having to decrypt to do processing,” says Oded Horovitz, CEO of the Palo Alto, Calif.-based start-up he co-founded with CTO Steve Weis and Carl Waldspurger as adviser. The vCage approach makes use of Intel Trusted Execution Technology and the Advanced Encryption Standard algorithm to perform the processing in RAM. This can be done with Sandy Bridge because there’s now about 20MB of cache available, he points out, enough to get the job done; the data in question is unencrypted only inside the CPU. The approach is being tested now by IaaS providers and some enterprises, and PrivateCore expects to have its first product in general release early next year. The start-up has received $2.4 million in venture capital from Foundation Capital.

Skycure is all about mobile-device security, with its initial focus on Apple iOS iPhones and iPads. It recently introduced what’s described as an intrusion-detection and prevention package for mobile devices, which Skycure’s co-founder and CTO Yair Amit says relies on the Skycure cloud service for security purposes. He says the goal is to prevent and mitigate any impact from attackers exploiting configuration profiles on mobile devices. Skycure, based in Tel Aviv, Israel, was co-founded by CEO Adi Sharabani and the company has received about $3 million in venture-capital funding from Pitango Venture Capital and angel investors.

Synack was founded by two former National Security Agency (NSA) computer network operations analysts, CEO Jay Kaplan and CTO Mark Kuhr. According to them, the Menlo Park, Calif.-based start-up brings together security experts with expertise in finding zero-day bugs in software, particularly in the websites and applications of Synack customers. “We pay researchers for vulnerabilities found,” explained Kaplan last August as Synack officially debuted. He says bug bounty rates typically run from a minimum of $500 to several thousand dollars for serious vulnerabilities in databases, for example. Synack says it has cultivated relationships with several bug hunters around the world, including at the NSA, who would be available to take on specific assignments. Synack has received $1.5 million in venture-capital funding from a combination of investors that include Kleiner Perkins Caufield & Byers, Greylock Partners, Wing Venture Partners, Allegis Capital and Derek Smith, CEO of start-up Shape Security.

Threat Stack, founded by CEO Dustin Webber with Jennifer Andre, wants to give enterprises a way to know if hackers are breaking into the Linux-based servers they may use in their cloud services. To monitor for hacker activity, the start-up’s Cloud Sight agent software for Linux needs to be installed on the Linux server under administrative control in the cloud environment, says Webber. “We look for the behavior of the hacker,” he points out, noting the enterprise will get an alert if a hacker break-in is underway and that a measure of forensics about incidents can be obtained if needed. Cloud Sight could also potentially be used by cloud service providers, but the initial focus is on monitoring for the enterprise, he says. Threat Stack, founded in Cambridge, Mass., in 2012, has obtained $1.2 million in funding from Atlas Venture and .406 Ventures. The start-up is yet another example of the new energy directed toward providing visibility, monitoring and security for businesses adopting cloud services.
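Looking "for the behavior of the hacker" generally means matching sequences of events rather than single events. A toy subsequence detector sketches the idea; the event names here are hypothetical and bear no relation to Cloud Sight's actual model:

```python
# Toy behavioral detector: flag a session that fetches a file, marks it
# executable, and runs it -- a classic drop-and-run pattern.  Real agents
# consume kernel audit data and use much richer models.
SUSPICIOUS_SEQUENCE = ("wget", "chmod+x", "exec")

def is_suspicious(events: list) -> bool:
    """True if the suspicious steps appear in order (other events may
    occur in between) -- an ordered-subsequence match via one iterator."""
    it = iter(events)
    return all(step in it for step in SUSPICIOUS_SEQUENCE)

session = ["login", "wget", "ls", "chmod+x", "exec"]
assert is_suspicious(session)
assert not is_suspicious(["login", "ls", "exec"])
```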




Wednesday, 13 November 2013

Microsoft: Fuel-cell powered data centers cost less, improve reliability

Microsoft researchers say fuel cell-based data centers could go where no data centers have gone before

Data centers powered by fuel cells, not the public power grid, could cut both capital and operational costs, improve reliability, pollute less and take up less space, according to Microsoft researchers.

This technology could make data center expansion possible in regions where utility-supplied power is tapped out but natural gas is abundant, according to a paper posted by Microsoft Research. Also, since the reliability of gas supply is better than that of electrical power, these data centers would suffer less downtime.

The researchers say there are many variables that need to be taken into account in engineering these facilities, but overall they hold potential for greener data centers.

The researchers looked at distributing relatively small fuel cells (similar to those used on propane-powered buses) around data centers, powering a rack or two of servers each, and found several potential benefits. The design eliminates the need for the wired electrical distribution system of a traditional data center. If a fuel cell were to fail, it would affect only a limited number of servers, which data center management software could handle. And since the power is DC, the AC-to-DC converters in the servers could be eliminated.

In that configuration the power supply would sit next to each rack, so there would be no need for a data-center-wide electricity distribution system with its attendant transformers, high-voltage switching gear and distribution cabling; the pipes and leak sensors needed to distribute natural gas cost less. The tradeoff in the amount of space the gear occupies amounts to a 30% reduction in the required square footage for the data center as a whole, the researchers say.

The fuel cells emit 49% less carbon dioxide, 68% less carbon monoxide and 91% less nitrogen oxide than traditional power sources, the researchers say.

Fuel cells do require specialized equipment not needed in traditional data centers, such as reformers that pull hydrogen from methane, batteries, startup systems and auxiliary circuits.

Design of a fuel cell powered data center would have to take into account the spikes in server usage that require instantaneous power supply increases. Fuel cells, which perform best under constant load, can lag seconds behind changes in demand. “Some of the spikes can be absorbed by the server power supply with its internal capacitors. But large changes like flash crowd and hardware failures must be handled by an external energy storage (batteries or super-caps) or load banks,” the paper says.

The researchers figured the cost of rack-level fuel cells at $3 to $5 per watt and planned a five-year replacement cycle for them, with the entire system life set at 10 years. They eliminated the cost of diesel generators and uninterruptible power supplies because the natural gas supply is so reliable.

The power-spike issue could be addressed by installing server-sized batteries that jump in with extra power when server hardware is starting up or shutting down, the times of greatest change in power draw. Fuel cells also give off heat, so these data centers would need greater fan capacity to cool them.

The capital cost of a traditional data center is $313.43 per rack per month; a rack-level fuel cell data center comes in between $50.72 and $63.36 below that, the researchers say. Operating expenses per rack per month for a traditional data center are $223.51, vs. $214.06 for one powered by polymer electrolyte membrane fuel cells. The savings would be greater with a different technology, the solid oxide fuel cell, and would also fluctuate with the price of electricity where the data center is located.
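Putting the quoted figures together gives a rough total monthly saving per rack:

```python
# Per-rack monthly figures as quoted from the paper
# (traditional capital cost is $313.43/rack/month).
capex_savings_range = (50.72, 63.36)           # capital-cost saving
opex_traditional, opex_fuel_cell = 223.51, 214.06

opex_savings = opex_traditional - opex_fuel_cell       # $9.45/month
total_savings = [c + opex_savings for c in capex_savings_range]
print([round(s, 2) for s in total_savings])    # -> [60.17, 72.81]
```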

Reliability of natural gas distribution systems is better than that of the electrical grid, and that would on average cut annual downtime from 8 hours, 45 minutes to 2 hours, 6 minutes, the researchers say.
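Expressed as availability percentages (using 8,766 hours in an average year), the downtime difference looks like this:

```python
# Annual downtime -> availability, from the figures above.
HOURS_PER_YEAR = 8766            # average year, including leap years
grid_downtime = 8 + 45 / 60      # 8 h 45 min on utility power
gas_downtime = 2 + 6 / 60        # 2 h 6 min on natural-gas fuel cells

grid_availability = 1 - grid_downtime / HOURS_PER_YEAR
gas_availability = 1 - gas_downtime / HOURS_PER_YEAR
print(f"{grid_availability:.4%} vs {gas_availability:.4%}")
# about 99.90% vs 99.98%
```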

The researchers considered using large fuel cells to plug into a traditional data center design as a direct replacement for a utility-provided electric service, but they decided that the larger the fuel cell the greater the chance of failure. Plus the cost was high.

They also considered tiny fuel cells to power individual servers. A failure would affect just one server, and because the cell is integrated there is no DC transmission loss. However, lots of tiny cells may add up to a less efficient and less cost effective use of energy than the slightly larger ones needed for racks.





Friday, 8 November 2013

What Google killing IE9 support means for software development

Google's announcement that it won't support Internet Explorer 9 is a sign of a broader move toward rapid iteration in software development.

Let's start with this: I am completely OK with Google's decision. Sure, it may be a bit of a drag for people running Windows Vista (as newer versions of Internet Explorer – 10 and 11 – require Windows 7 or 8) but, let's be honest: nobody expects any company to spend the money and man-hours supporting every web browser for all eternity. And Vista users still have the option of installing another web browser, such as Firefox or Chrome.

So, if this isn't all that big of a deal, why am I bringing it up?

Web browsers are, in essence, platforms for running software.
Internet Explorer 9 was released in 2011. It’s only two years old.

That means that we have reached the point where complete application platforms are being deprecated, and left unsupported, after having existed for only two years. And, while that does bode well for the rapid improvement of platforms, it comes with a pretty steep price.

The most obvious is that end users are put in the position of needing to upgrade their systems far more often. This costs a not-insignificant amount of time (especially in larger organizations) and money. It is, to put it simply, inconvenient.

This rapid iteration of new versions of these systems also takes a heavy toll on software development. More versions of more platforms means more complexity in development and testing. This leads to longer, and more costly, development cycles (and significantly higher support costs). The result? The software that runs on these systems is improved at a slower rate than would otherwise be possible, and in all likelihood they will be of lower quality.
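The cost of that complexity is easy to quantify: every additional browser version multiplies the test matrix by the number of supported platforms. A quick illustration with a hypothetical support matrix:

```python
from itertools import product

# Hypothetical support matrix: every OS/browser pair must be tested.
oses = ["Windows 7", "Windows 8", "OS X", "Linux"]
browsers = ["IE10", "IE11", "Chrome", "Firefox", "Safari"]

matrix = list(product(oses, browsers))
assert len(matrix) == len(oses) * len(browsers)  # 20 configurations
# Each additional browser version adds len(oses) new configurations:
assert len(list(product(oses, browsers + ["Chrome beta"]))) == 24
```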

These are some pretty major drawbacks to the current “Operating Systems and Web Browsers are updated every time the wind changes direction” situation. But is it really all that bad? The alternative, for Windows users, isn't terribly attractive. Nobody wanted to be stuck with IE 6 for a second longer than was absolutely necessary.

I don't have a solution to any of this, mind you. Not a good one, at any rate – maybe we should make a gentleman's agreement to not release new Operating Systems or Browsers more often than every three years. (See? Not a good solution.)

I'm just not a big fan of how it's currently working.
