
Net-Ctrl Blog

Multi-Gigabit Use Cases

November 9th, 2018

These days, most access switches and end-user devices have 1 GbE ports, which are plentiful, highly competitive and affordable. Though still in the minority, switches with 2.5 Gigabit Ethernet ports to support 802.11ac access points (APs) are becoming more common. Indeed, there is a range of devices – both on the market and anticipated to launch – that can take advantage of Ethernet switch ports running at 2.5 GbE.

Unsurprisingly, switches with 2.5 GbE ports cost more than those with 1 GbE ports. Ruckus offers 2.5 GbE switches at a modest premium, although many other vendors sell switches with 2.5 GbE, 5 GbE and 10 GbE ports that cost more and are generally overkill for 802.11ac (Wi-Fi 5). Many 802.11ax (Wi-Fi 6) APs hitting the market will feature 5 GbE ports, although few other devices are expected to support 5 GbE.

When to use multi-gigabit connectivity

10 Gigabit Ethernet – standardized long before 802.3bz – is primarily used for servers, storage and other devices in the data center. Very few end-user devices support 10 GbE. However, more and more devices, such as laptops, point-of-sale units and video cameras, are losing their tethers and moving to wireless connectivity. This increases the data load on wireless networks and drives the primary use case for 2.5 GbE and 5 GbE, as well as for a new generation of access points. Multi-gigabit connectivity should be considered as organizations move to 802.11ac (Wi-Fi 5) and 802.11ax (Wi-Fi 6) and start implementing the next generation of Wi-Fi networks.

There are additional considerations that go hand-in-hand with multi-gigabit connections, such as Power over Ethernet (PoE) requirements and expectations for future growth. It is important to understand the PoE power requirements for a new generation of access points equipped with multi-gigabit ports. Early APs routinely operated on PoE, drawing just 15 watts at the switch. However, more powerful radios consume more power. Even so, most APs today can still be powered by PoE or PoE+, the latter of which delivers 30 watts to the AP. And while the latest 802.11ac (Wi-Fi 5) APs can operate on 30 watts, many need just a little more to achieve top performance – to drive all the radios and provide power to the USB port.

The newest generation of 802.11ax (Wi-Fi 6) APs is likely to require even more power than its predecessors. While 802.11ax (Wi-Fi 6) APs will operate on PoE+ power, they will demand more power to drive 8×8 radios and achieve peak performance. A newer standard, 802.3bt, addresses the PoE requirements of 802.11ax (Wi-Fi 6) APs, as well as devices such as LED lighting, pan-tilt-zoom (PTZ) cameras and HDTVs. 802.3bt – which provides for both 60 watts and 90 watts of power per port – was ratified by the IEEE in September 2018. Organizations planning to deploy new switches with multi-gigabit connectivity should make sure they deliver sufficient PoE to support newer APs.
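
As a rough illustration of how these power budgets add up, the sketch below (Python) estimates how many APs of each PoE class a single switch budget can power. The per-port wattages follow the classes mentioned above; the 740-watt switch budget is a hypothetical example value, not a figure from this post.

```python
# Rough PoE budget check: how many APs of each class fit within a switch's
# total PoE budget. Per-port wattages follow the classes discussed above
# (802.3af ~15 W, 802.3at/PoE+ ~30 W, 802.3bt ~60 W or ~90 W per port).

PER_PORT_WATTS = {
    "802.3af (PoE)": 15,
    "802.3at (PoE+)": 30,
    "802.3bt Type 3": 60,
    "802.3bt Type 4": 90,
}

def max_powered_aps(switch_budget_watts: float, ap_class: str) -> int:
    """Return how many APs of the given PoE class the budget can power."""
    return int(switch_budget_watts // PER_PORT_WATTS[ap_class])

if __name__ == "__main__":
    budget = 740  # hypothetical total PoE budget for a 48-port switch
    for ap_class in PER_PORT_WATTS:
        print(f"{ap_class}: up to {max_powered_aps(budget, ap_class)} APs")
```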

It should also be noted that there are detailed specifications for connections running at more than one gigabit per second over standard twisted-pair copper cabling, so it is important to understand the requirements and how they match existing cabling. The IEEE ratified the 802.3bz standard in 2016 to add 2.5 and 5 Gigabit Ethernet over twisted-pair wiring, specifically to support new generations of Wi-Fi over copper without having to move to fiber optics.

Both 1 Gigabit and 2.5 Gigabit Ethernet can run over Cat 5e cabling for up to 100 meters. However, 5 Gigabit Ethernet requires Cat 6 cabling to reach 100 meters, and 10 Gigabit Ethernet requires Cat 6a. A significant number of buildings still have only Cat 5e cabling, in which case supporting the faster speeds would require re-cabling the property. In practical terms, this means organizations should check the type of cabling currently installed in their buildings when considering an upgrade to multi-gigabit. If new cabling is required, organizations should calculate the upgrade costs and determine whether moving to multi-gigabit is worth the expense.
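
For quick reference, the snippet below encodes those cabling requirements as a simple lookup, so you can check whether an installed cable plant supports a target speed over a full 100-metre run. The mapping reflects only the figures stated in this post; individual installations can vary.

```python
# Minimum cable category needed for a full 100 m run, per the requirements
# described above, plus a small helper to test an installed cable plant.

MIN_CATEGORY_FOR_100M = {
    "1GbE": "Cat 5e",
    "2.5GbE": "Cat 5e",
    "5GbE": "Cat 6",
    "10GbE": "Cat 6a",
}

CATEGORY_RANK = {"Cat 5e": 1, "Cat 6": 2, "Cat 6a": 3}

def cabling_supports(installed: str, speed: str) -> bool:
    """True if the installed category meets the minimum for `speed` at 100 m."""
    return CATEGORY_RANK[installed] >= CATEGORY_RANK[MIN_CATEGORY_FOR_100M[speed]]

print(cabling_supports("Cat 5e", "2.5GbE"))  # True: no re-cabling needed
print(cabling_supports("Cat 5e", "5GbE"))    # False: Cat 6 required for 100 m
```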

Organizations should also be sure to understand the lifecycle of their infrastructure. Wi-Fi standards, equipment and gigabit usage are evolving so rapidly that companies and organizations refresh their Wi-Fi access points approximately every three years. However, the switch lifecycle averages closer to five to seven years for commercial enterprises – and up to seven to ten years for the education market. Organizations should therefore ensure that new switch purchases will support current Wi-Fi networks and at least one more refresh cycle, if not more. During this period, they will see more users, more devices per user and greater demand for throughput generated by streaming audio and video. Put simply, future-proofing switching is essential to protecting any network infrastructure investment.

View the original blog post by Rick Freedman at The Ruckus Room.

Cloud Security: How to Secure Your Sensitive Data in the Cloud

November 9th, 2018

In today’s always-connected world, an increasing number of organisations are moving their data to the cloud for operational efficiency, cost management, agility, scalability, etc.

As more data is produced, processed, and stored in the cloud – a prime target for cybercriminals who are always lurking around to lay their hands on organisations’ sensitive data – protecting the sensitive data that resides on the cloud becomes imperative.

While most Cloud Service Providers (CSPs) have already deployed strong front-line defence systems like firewalls, anti-virus, anti-malware, intrusion detection, etc. to thwart malicious attacks, sophisticated hackers are breaching them with surprising ease. And once a hacker gains inside entry by breaching the CSP’s perimeter defences, there is hardly anything that can be done to stop them from accessing an organisation’s sensitive data. Which is why more and more organisations are encrypting their cloud data today as a critical last line of defence against cyber attacks.

Data Encryption Is Not Enough

While data encryption definitely acts as a strong deterrent, merely encrypting the data is not enough in today’s perilous times, when cyber attacks are getting more sophisticated with every passing day. Since the data physically resides with the CSP, it is out of the direct control of the organisation that owns it.

In a scenario like this where organisations encrypt their cloud data, storing the encryption keys securely and separately from the encrypted data is of paramount importance.

Enter BYOK

To ensure optimal protection of their data in the cloud, an increasing number of organisations are adopting a Bring Your Own Key (BYOK) approach that enables them to securely create and manage their own encryption keys, separate from the CSP that hosts their sensitive data.

However, as more encryption keys are created for a growing number of cloud environments like Microsoft Azure, Amazon Web Services (AWS), Salesforce, etc., efficiently managing the encryption keys of individual cloud applications and securing access to them becomes very important. Which is why many organisations use External Key Management (EKM) solutions to cohesively manage all their encryption keys in a secure manner, free from any unauthorised access.

Take the example of Office 365, Microsoft’s on-demand cloud suite that is widely used by organisations across the globe to support employee mobility, providing anytime, anywhere access to Microsoft’s email application (MS Outlook) and business utility applications like MS Word, Excel, PowerPoint, etc.

Gemalto’s BYOK solutions (SafeNet ProtectApp and SafeNet KeySecure) for Office 365 not only ensure that organisations have complete control over their encrypted cloud data but also seamlessly facilitate efficient management of the encryption keys of other cloud applications like Azure, AWS, Google Cloud and Salesforce.

Below is a quick snapshot of how SafeNet ProtectApp and SafeNet KeySecure seamlessly work with Azure BYOK:

To elaborate, below is the step-by-step process of how this works:

  1. SafeNet ProtectApp and KeySecure are used to generate an RSA key pair of the required key size using the FIPS 140-2 certified RNG of KeySecure.
  2. A Self-SignedCertificateUtility.jar (which is a Java-based application) then interacts with KeySecure using a TLS-protected NAE service to fetch the Key Pair and create a Self-signed Certificate.
  3. The Key Pair and Self-signed Certificate are stored securely in a PFX or P12 container that encrypts the contents using a Password-based Encryption (PBE) Key.
  4. The PFX file (an encrypted container protected by a PBE key) is then uploaded to Azure Key Vault using the Azure REST API.
  5. The transmission of the PFX file to the Azure Key Vault is protected using security mechanisms implemented by Azure on their Web API (TLS / SSL, etc.).
  6. Since the PFX files will be located on the same system on which the SelfSignedCertificateUtility.jar utility will be executed, industry-best security practices like ensuring pre-boot approval, enabling two-factor authentication (2FA), etc. should be followed.
  7. Once the Keys are loaded on Azure Key Vault, all encryption operations happen on Azure platform itself.
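
To make steps 1 to 3 concrete, here is a minimal local sketch using the open-source Python `cryptography` library to generate an RSA key pair, wrap it in a self-signed certificate and package both into a password-protected PFX container. It illustrates the artefacts involved, not the SafeNet utility itself; in the actual workflow the key material comes from KeySecure’s FIPS 140-2 certified RNG, and the names and password below are placeholders.

```python
# Illustrative stand-in for steps 1-3: RSA key pair + self-signed certificate,
# packaged in a password-protected PFX/P12 container.
from datetime import datetime, timedelta

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives.serialization import BestAvailableEncryption, pkcs12
from cryptography.x509.oid import NameOID

# Step 1 (local analogue): generate an RSA key pair of the required size.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Step 2 (local analogue): create a self-signed certificate for the key pair.
subject = issuer = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "byok-demo")])
cert = (
    x509.CertificateBuilder()
    .subject_name(subject)
    .issuer_name(issuer)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.utcnow())
    .not_valid_after(datetime.utcnow() + timedelta(days=365))
    .sign(key, hashes.SHA256())
)

# Step 3: store key pair and certificate in a PFX container protected by a
# password-based encryption key (the password here is a placeholder).
pfx_bytes = pkcs12.serialize_key_and_certificates(
    name=b"byok-demo", key=key, cert=cert, cas=None,
    encryption_algorithm=BestAvailableEncryption(b"change-this-password"),
)
with open("byok-demo.pfx", "wb") as f:
    f.write(pfx_bytes)  # this file would then be uploaded to Azure Key Vault (step 4)
```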

Continue to find out what to consider when choosing a Key Management solution, as well as how Gemalto can support organisations to make their BYOK journey easier.

To Sum It Up

As technology evolves, so do cybercriminals, and merely encrypting data no longer guarantees foolproof data protection today. While encrypting their sensitive cloud data, organisations must bear in mind that securing and managing the encryption keys is as important as the encryption itself.

To prevent unauthorized access and ensure that the encryption keys don’t fall in the wrong hands, cybersecurity experts unanimously recommend the use of Hardware Security Module (HSM) devices to securely store the encryption keys.

Since encryption keys pass through multiple phases during their lifetime – generation, storage, distribution, backup, rotation and destruction – efficiently managing these keys at every stage of their lifecycle becomes important. A secure and centralized key management solution is critical.

Gemalto’s SafeNet KeySecure offers organisations a robust centralized platform that seamlessly manages all encryption keys. Below are some key benefits that make SafeNet KeySecure a preferred choice for organisations across the globe:

  1. Heterogeneous key management – helps in seamlessly managing multiple encryption keys at each stage of their lifecycle.
  2. Logging and auditing – helps in storing audit trails that can be analyzed by using any leading SIEM tools.
  3. Centralized management console – helps in assigning administrator roles according to the scope of their responsibilities.
  4. High interoperability – supports a broad ecosystem of respected technology partners using the OASIS KMIP standard.
  5. Automated operations – reduces the overall cost of data security.

Learn more about how Gemalto’s suite of cloud security solutions can help your organisation fully secure your data in the cloud.

View the original article at Gemalto.com.

Take a number, we’ll be right with you: Wi-Fi connections and capacity

November 7th, 2018

Wi-Fi connects the world, one device at a time. Literally. One. Device. At. A. Time. Wi-Fi is a half-duplex technology. This means only one device gets to transmit at a time. All other devices sharing that channel have to wait their turn to make Wi-Fi connections. Yet we talk about high capacity and how many devices an AP can support. What does that mean if the answer is always one?

When more than one device is connected to an AP, they must share the air. All other things being equal, the devices and the AP (it counts as a device too!) will take turns transmitting. You could easily have 10, 50, 100, or more devices connected to an AP. But each still has to wait for its turn to talk.

If you want to sound like a Wi-Fi pro, you’ll need to understand a few things about capacity: how many Wi-Fi connections an AP can keep track of, how many devices are trying to talk simultaneously, and how fast each one can talk.

You might have 100 devices connected to an AP, but if only 10 need to transmit at a given time, you don’t have to wait long for your turn. The other 90 devices stay connected and hang out until they have something to say.

Now, imagine you’ve got 500 devices connected and 250 want to talk simultaneously. That’s like being stuck in line at the restroom during a concert and there are 249 people ahead of you. Yikes.

If all of the devices are fast, your turn will come much more quickly: think of your 802.11ac smartphone versus Grandma’s old 802.11g laptop. No matter what you do, the phone will be capable of going faster than the laptop. But that doesn’t mean they will get the same performance on all APs.
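
A back-of-the-envelope model makes the point: if every active device gets an equal slice of airtime on the half-duplex channel, each one’s effective throughput is roughly its own link rate divided by the number of contenders. The rates in the sketch below are illustrative values, not measured figures.

```python
# Rough model of shared airtime on a half-duplex channel: with equal airtime
# sharing, per-device throughput is approximately link_rate / active_devices.

def effective_throughput(link_rate_mbps: float, active_devices: int) -> float:
    """Approximate per-device throughput with equal airtime sharing."""
    return link_rate_mbps / active_devices

# 100 devices associated, but only 10 actively transmitting:
print(effective_throughput(400, 10))   # ~40 Mbps each for fast 11ac clients
# 250 active talkers out of 500 associated devices:
print(effective_throughput(400, 250))  # ~1.6 Mbps each - the restroom queue
```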

Ruckus helps you wring every last bit of speed out of any device with innovations like BeamFlex+, transient client management, auto RF cell sizing, airtime decongestion, and much more. When you’ve got a network with lots of Wi-Fi devices (why, hello, IoT), any extra performance boosts can make a big difference.

Read the original report at The Ruckus Room.

Roam If You Want To: Moving Wi-Fi Devices Around the World

October 30th, 2018

Wi-Fi means mobility. Devices that can move around and be free of all that pesky cabling. Roaming, the ability for a device to move from one AP to another, is critical. There is nothing more frustrating than sitting underneath an AP—you can see it, the AP is right there—but you aren’t getting a strong signal. What to do?

First, let’s address a common misconception: who tells a Wi-Fi device to roam? Most people will say the AP. In fact, the device decides and there is very little the AP—or any other device—can do about it.

Usually, the client stays connected to its AP until the signal strength drops and becomes too low. How low? Well, that’s up to the device. And all devices are a little bit different. A few allow you to adjust the roaming threshold setting, but most do not. A client device that should roam but doesn’t is known as a sticky client.
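
Here is a toy sketch of that client-side decision: the device keeps its current AP until the measured signal falls below its own roaming threshold. The -75 dBm value is purely illustrative; real devices use their own, usually non-adjustable, thresholds plus additional heuristics.

```python
# Toy model of the client-side roam decision: stay on the current AP until
# the signal drops below the device's own roaming threshold.

def should_roam(current_rssi_dbm: float, roam_threshold_dbm: float = -75.0) -> bool:
    """Return True when the client would start looking for a better AP."""
    return current_rssi_dbm < roam_threshold_dbm

print(should_roam(-60))  # False: signal still fine, client stays put
print(should_roam(-80))  # True: below threshold, client scans for a new AP
```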

But you’re still here, standing under an AP with a device reporting a low signal. How do you get that stubborn thing to roam?

Fortunately, there are some tricks available. The first is a standards-based method that tells the device about the network and helps it make a better decision. The IEEE 802.11k standard allows a device to ask an AP about the network, including how well the AP can hear it, and vice versa. It can even get a list of neighbouring APs instead of having to scan for itself. Kind of like using a dating service versus going to a bar to meet someone new! More importantly, it gives the device a much better idea of whether it’s time to move on or stick with its current AP.

Another standard, 802.11v, allows an AP to politely request that the device move, and even to provide a list of suggested APs the device could move to. Sounds great!

The downside to both of these is that the AP and the device each need to support the standard. Some do, but not all.

We mentioned that roaming – the decision to disconnect from one AP and connect to another – is a device decision. But there is a way an AP can “force” a device to move: it can send a de-authentication message that disconnects the device. Of course, the device will automatically try to reconnect. As part of the reconnection process, it scans its surroundings – Egad! I’m right under an AP! – and connects to the closer AP.

All Ruckus APs support this. As a matter of fact, we combine this concept of forcing a device to move along with some other intelligence around exactly when to use it, plus standards like 802.11k and 802.11r. We call it SmartRoam+. You can call it helping Wi-Fi devices roam more quickly and seamlessly.

There’s a lot more to device roaming and we’ll save that for another post. But in the meantime, you can use this to get your “stuck” devices moving again.

View the original post at The Ruckus Room.

Data Encryption: From “Good-to-Have” To “Must-Have”

October 30th, 2018

Whenever the news of any data breach surfaces, the first action of most organisations is to take an immediate stock of their IT perimeter defences and update them to avoid getting breached themselves.

While it is definitely a good strategy to ensure that perimeter defence systems like firewalls, antivirus, antimalware, etc. – which act as the first line of defence – are always kept updated, focusing only on these mechanisms is no longer sufficient in today’s perilous times, when hackers are breaching organisations’ cybersecurity more frequently than ever before.

As per the H1 results of Gemalto’s 2018 Breach Level Index, more than 3.3 billion data files were breached across the globe in the first six months of 2018 alone. This figure marks an increase of a whopping 72% over those recorded for H1 2017! And unsurprisingly, more than 96% of these breaches occurred on data that was not encrypted.

The latest victim of data theft in India is Pune-based digital lending startup EarlySalary, who suffered a massive data breach in which the personal details, employment status and mobile numbers of its 20,000 potential customers were stolen. The company discovered the breach only after they received a ransom demand from the hackers, following which they plugged the vulnerability. While the company claimed that the attack was centred on one of its older landing pages, the damage was already done.

With rising cyber attacks such as these, organisations can no longer live under the illusion that once they deploy robust perimeter defence systems, they are safe. Whether it is an attack on startups like EarlySalary that may have rudimentary perimeter defences or conglomerates like Facebook, SingHealth and Equifax that most likely had deployed top-notch front-line defence systems, the common denominator between the data breaches at all these organisations is that they focused only on their front line defences (perimeter security) while ignoring their last line of defence – data encryption.

Secure the Data, Not Just the Systems

While perimeter security mechanisms indeed act as a strong deterrent against cyber attacks, they are rendered completely useless once hackers gain an inside access to an organisation’s data files.

Whether the data is at rest or in motion (during transfer), encrypting it is perhaps the surest way of safeguarding it against malicious attacks. Since encryption makes it virtually impossible to decipher the data without the corresponding decryption key, hackers have little incentive to breach organisations that have encrypted their data.

Below are three steps that organisations need to take to ensure optimal data protection:

1. Locate sensitive data

First, identify where your most sensitive data files reside – audit your storage and file servers, applications, databases and virtual machines, along with the data that’s flowing across your network and between data centers.

2. Encrypt & Tokenize it

When choosing a data encryption solution, make sure that it meets two important objectives – protecting your sensitive data at each stage and tokenizing it.

Gemalto’s SafeNet Data Encryption Solutions not only encrypt data seamlessly at each stage (at rest and in motion) but also incorporate a proprietary Tokenization Manager that automatically generates a random surrogate value (also known as a Token or Reference Key) for each data file to avoid easy identification.
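
Conceptually, tokenization swaps each sensitive value for a random surrogate and keeps the real value only in a secure vault keyed by that surrogate. The Python sketch below illustrates the idea only; it is not how SafeNet’s Tokenization Manager is implemented.

```python
# Simplified tokenization: replace a sensitive value with a random surrogate
# (token) and keep the real value only in a secure vault keyed by the token.
import secrets

_token_vault = {}  # stand-in for a secured, access-controlled token vault

def tokenize(sensitive_value: str) -> str:
    """Replace a sensitive value with a random surrogate reference key."""
    token = secrets.token_hex(16)
    _token_vault[token] = sensitive_value
    return token

def detokenize(token: str) -> str:
    """Recover the original value - only possible with access to the vault."""
    return _token_vault[token]

card = "4111 1111 1111 1111"
token = tokenize(card)
print(token)              # random surrogate safe to store/process downstream
print(detokenize(token))  # original recovered only via the vault
```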

3. Safeguard and manage your crypto keys

To ensure zero-compromise of your data’s encryption keys, it is important that the keys are stored securely and separately from your encrypted data. Use of Hardware Security Modules (HSMs) is perhaps the surest way of ensuring optimal key security.

When choosing an HSM solution, make sure that it also provides key management to handle the crypto keys at each stage of their lifecycle – generation, storage, distribution, backup, rotation and destruction.

Gemalto’s SafeNet HSMs come with an in-built Key Management feature that cohesively provides a single, robust, centralized platform that seamlessly manages the crypto keys at each stage of their lifecycle.

5 Reasons Why Data Encryption Is a Must

With cyber attacks on the rise with every passing day, the cybersecurity landscape across the globe has witnessed a tectonic shift in the last few years. First-line of defence mechanisms like perimeter security are no longer sufficient to prevent data breaches, since after an intrusion, there is hardly anything that can be done to protect the data that is not encrypted.

Realising this, Governments across the globe are introducing stringent regulations like the General Data Protection Regulation (GDPR), RBI’s Data Localisation, PCI-DSS and the upcoming Personal Data Protection Law, 2018 in India to ensure that organisations make adequate security provisions to protect their users’ confidential data.

Below are a few reasons why data encryption is no longer “good-to-have”, but “must-have” in today’s world:

1. Encryption Protects Data At All Times

Whether the data is at rest or in motion (transit), encryption protects it against all cyber attacks, and in the event of one, renders it useless to attackers.

2. Encryption Maintains Data Integrity

Cyber criminals don’t always breach an organisation’s cybersecurity to steal sensitive information. As seen in the case of the Madhya Pradesh e-Tender scam, they often breach organisations to alter sensitive data for monetary gain. Encryption maintains data integrity at all times and immediately red-flags any alteration to the data.

3. Encryption Protects Privacy

Encryption ensures the safety of users’ private data while upholding their anonymity and privacy, which reduces surveillance opportunities for governments and cyber criminals. This is one of the primary reasons why Apple strongly believes that encryption will only strengthen our protection against cyberattacks and terrorism.

4. Encryption Protects Data Across Devices

In today’s increasingly Bring Your Own Device (BYOD) world, data transfer between multiple devices and networks opens avenues for cyber attacks and data thefts. Encryption eliminates these possibilities and safeguards data across all devices and networks, even during transit.

5. Encryption Facilitates Regulatory Compliance

To safeguard users’ personal data, organisations across many industries have to comply with stringent data protection regulations like HIPAA, GDPR, PCI DSS, RBI Data Localisation, FIPS, etc. that are mandated by local regulators. Encryption assures optimal data protection and helps ensure regulatory compliance.

It’s time for a new data security mindset. Learn how Gemalto’s 3-step Secure the Breach approach can help your organisation secure your sensitive data from cyber-attacks.

For more information contact Net-Ctrl direct through our Contact Page, or call us direct on 01473 281 211.

View the original article by Gemalto.

Multi-gigabit is right here, right now

October 26th, 2018

I recently came across an interesting TechTarget article that discusses when an organization should upgrade to multi-gigabit (mGig) switches to support a new generation of 802.11ax access points (APs). As we’ve previously discussed here on the Ruckus Room, the IEEE 802.11ax (Wi-Fi 6) standard features multiple enhancements that enable access points to offer an expected four-fold capacity increase over its 802.11ac Wave 2 predecessor (Wi-Fi 5) in dense scenarios.

The introduction of 802.11ax (Wi-Fi 6) access points is certainly timely, as many organizations are already pushing the limits of the 802.11ac (Wi-Fi 5) standard, particularly in high-density venues such as stadiums, convention centers, transportation hubs, and auditoriums. Indeed, the proliferation of connected devices, along with 4K video streaming, is placing unprecedented demands on networks across the globe.

To accommodate the demand for increased capacity, some organizations have begun deploying 802.11ax (Wi-Fi 6) access points alongside existing 802.11ac (Wi-Fi 5) access points, with the former expected to become the dominant enterprise Wi-Fi standard by 2021. To take full advantage of the speeds offered by 802.11ax (Wi-Fi 6) APs (up to 5 gigabits per second), organizations have also begun installing multi-gigabit switches to either replace or supplement older infrastructure. This is because system administrators cannot ensure a quality user experience by simply upgrading one part (access points) of a network. To reap the benefits of 802.11ax (Wi-Fi 6) requires upgrades on the switch side as well.

The transition to multi-gigabit switches

It is important to emphasize that the transition to multi-gigabit switches does not necessarily require a wholesale infrastructure upgrade. It can happen gradually, adding a few switches as needed. Furthermore, most multi-gigabit switches today include a mix of multi-gigabit and gigabit ports. Only those ports connected to 802.11ax (Wi-Fi 6) APs require multi-gigabit speeds, while the remaining gigabit ports are adequate for computers, printers, VoIP phones, cameras and other Ethernet devices.

With the introduction of 802.11ax (Wi-Fi 6) starting now and the approaching avalanche of IoT connections, higher speed in the wired infrastructure is critical to prevent bottlenecks and maintain optimal network performance. I suggest that the transition to multi-gigabit switches should start now. With the average life for a switch being 5 to 7 years and up to 10 years for many institutions, the need for multi-gigabit connections will almost certainly be upon us within this timeframe.

Read the original post by Rick Freedman at The Ruckus Room.

Facing the Facebook Breach: Why Simple SSO is Not Enough

October 26th, 2018

Let’s ‘face’ it. The September 2018 Facebook breach was not only a ‘mega’ breach in terms of the 50 million compromised users affected, but also a severe breach due to the popularity of the social media giant. To recap, cyber criminals got hold of users’ FB login credentials. The breach was compounded by the fact that many users utilize their Facebook credentials to log into other social media sites, which means the hackers gained access not only to a user’s Facebook account, but also to every other account that uses Facebook login credentials.

SSO not a social media fashion – it’s an enterprise must

In essence, the Facebook credentials act as a simple, ‘eat-all-you-want’ Single Sign On (SSO) for other social platforms. But the popularity of SSO solutions is not just a Facebook fashion. It is a viable business need, meant for the convenience of organizations that need access to their day-to-day web and cloud-based applications. Simple Single Sign On offers clear advantages for enterprises: no need to maintain a separate set of passwords for each and every application; reduced IT overload and fewer password-reset requests; and increased productivity for employees, contractors and remote workers, who authenticate once and access everything they need, any time and any place.

The demand for SSO in enterprises has grown with the rise in the number of web and cloud-based apps. However, along with wide SSO implementation has come the risk associated with simple SSO. Only a month before the Facebook breach, the potential ‘massive’ security dangers of Single Sign On were discussed at the USENIX conference in Baltimore. The paper describes how criminals can gain control of numerous other web services when a single account is hacked.

Google+ access to 3rd party apps now a minus

When it comes to third-party app violations, Google has not been spared. Its “Project Strobe” review revealed stark findings related to the third-party access API for Google+ users. Due to a bug, third-party apps were granted access to profile information that users had not marked as public. As a result, Google recommended sunsetting Google+ for consumers and concentrating R&D efforts on giving enterprises better control over what account data they choose to share with each app. Apps will need to show requested permissions one at a time, each within its own dialog box, as opposed to all requested permissions on a single screen.

Smart SSO with role-based policies

The risks that consumers were exposed to as a result of buffet-style sign-on in the Facebook case also apply to the enterprise. Fortunately, there is a solution: to maintain the convenience of single sign-on without compromising on security, enterprises can use Smart Single Sign-On. With a smart SSO solution such as Gemalto’s SafeNet Trusted Access, enterprises can define conditional access policies. These policies can restrict or relax access to various applications, depending on the risk. For example, groups of users would be able to authenticate only once when working in the office, but would have to re-enter their password or another form of 2FA (e.g. SMS, pattern-based code, hardware token) for more restricted access.

To increase trust without sacrificing the convenience of SSO for most apps and scenarios, stepping up authentication after the initial SSO login is an advantage. Enterprises can choose their access controls for specific user groups, sensitive apps and contextual conditions by applying scenario-based access policies.
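
As a rough illustration of scenario-based policy, the sketch below accepts plain SSO in low-risk contexts and requires a second factor for sensitive apps, off-network access or external users. The rules and names are invented for illustration; they do not represent the configuration model of SafeNet Trusted Access or any specific product.

```python
# Generic sketch of a scenario-based access policy: SSO alone for low-risk
# contexts, SSO plus a second factor for riskier ones.

SENSITIVE_APPS = {"payroll", "finance-reporting"}  # hypothetical app names

def required_authentication(app: str, on_corporate_network: bool, user_group: str) -> str:
    """Decide whether plain SSO is enough or a second factor is required."""
    if app in SENSITIVE_APPS:
        return "sso + 2fa"   # always step up for restricted apps
    if not on_corporate_network:
        return "sso + 2fa"   # off-network access carries more risk
    if user_group == "contractor":
        return "sso + 2fa"   # tighter policy for external users
    return "sso"             # office-based employees: SSO only

print(required_authentication("crm", True, "employee"))      # sso
print(required_authentication("payroll", True, "employee"))  # sso + 2fa
```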

Trusted Identity as a Service Provider

Using access management, enterprises can federate dozens of cloud applications without unnecessary burdens on IT teams, while keeping in place the necessary protections.

With Smart SSO, the proliferation of cloud apps need not lead to a feast of security breach reports. To learn more about the smart face of single sign-on and prevent an iDaaS-ter (Identity as a Service disaster), download the fact sheet, Matching Risk Policies to User Needs with Access Management, read more about Identity as a Service, or watch how Gemalto SafeNet single sign-on solutions work in the cloud.

View the original post at Gemalto.com.

Gemalto’s vision for next generation digital security

October 25th, 2018

Digital transformation is a term that we’ve all heard a lot over the last 10 years, often in the context of a specific industry or process. But it’s undeniable now that the entire world is going through a digital transformation that is touching every aspect of our lives – how we live, how we work and how we discover the wider world around us.

An increasingly digital world means an ever-increasing number of pieces of data being exchanged every time we use an online service or a connected device. There are already billions of these exchanges taking place every day, and it’s estimated that by 2025, there will be 50 times more individual digital interactions than there were in 2010. This data defines our online lives, so being able to trust it is critical. With expectations from enterprises and consumers growing, both in the amount of information we share and how it’s protected, the challenge is a significant one.

Traditional security is no longer enough

Breaches are growing every year, across all sectors, with British Airways and Air Canada among the most recent high profile victims. Our Breach Level Index has tracked the number of data records lost or stolen since 2013, and with an estimated 5 million more being added every day, the total should easily hit a staggering 10 billion before the end of this year.

Technology firms have borne the brunt of these breaches but everyone is a target, from entertainment to healthcare and even education. In the majority of cases, the main cause of the attacks is identity theft. And once inside the network the real damage comes from unencrypted data – shockingly, 96% of breaches involved unencrypted data that the hacker could easily profit from (particularly in the case of credit card details).

The ever-growing list of high profile breaches shows that traditional security solutions are reaching their limits. Faced with a worldwide digital transformation that doesn’t look like it is set to slow down, we need to deploy a new generation of digital security solutions. This next-generation security must help organizations verify users’ identities in a purely online context. It must also remove the need for people to remember hundreds of (weak) passwords and shouldn’t add intrusive security steps (which is why I see a growing role for biometrics and risk-based authentication). Finally, it needs to ensure that individuals’ digital privacy is respected and their data isn’t monetized – unless they’ve given their express permission. If not, people will leave the service and regulators will come down on offenders with heavy fines.

The portfolio of security services that we have built up over the last decade has put us in a unique position to help Service Providers and Governments answer these challenges by providing trusted digital identities to combat ID theft, and protection for previously unencrypted data.

Next generation digital security

Our strategic goal is to help our customers protect their entire digital service cycle from sign-up to opt-out. This starts when a new user has to prove his or her identity to register for a service, after which they are delivered a strong digital ID in the form of a smartcard, a digital token or by using biometric data. When they log in, we can authenticate them using multiple factors and modes – from risk-based analysis to facial recognition. When using the service, we can encrypt all data using key management techniques and hardware security modules. And when they leave, cryptographic account deletion means their data is unrecoverable.

We believe that there are four key pillars to next-generation digital security:

  • Open. We don’t believe in building walls around digital assets. To add value, people and data must be able to flow in an open, decentralized, federated yet secure, way.
  • Proven. It’s not enough to just say you’re an expert in security – you have to prove it, time and time again. Companies need measurable fraud reduction and liability management, and our long-term blue-chip customers are the best evidence of our capability here.
  • Hybrid. Security tools must be designed to work in the real world. That means data security must be flexible enough to deal with a mix of hybrid, on-premise and cloud IT environments.
  • Convenient. If security stops people from doing what they need to do, it’s failed. We’re providing smooth user experiences by leveraging technology like biometrics to help make authentication frictionless and invisible.

We’re proud to play our part in protecting the world’s data, and enabling organizations across the globe to have successful digital transformations. As you may have seen from the announcement by Thales of the proposed acquisition of Gemalto, they have the same view of the growing needs for digital security as we do. The plan is to keep Gemalto’s scope intact and coherent within a new global business unit at Thales; our activities would be combined with Thales assets and expertise in cybersecurity, data analytics and artificial intelligence, which would only increase our ability to fulfil this mission.

Interested in reading more on our vision for next-generation security?

This article originally appeared on Philippe Vallée’s LinkedIn profile.

Encryption and the Fight Against Threats from Emerging Data-Driven Technologies

October 25th, 2018

It has been a year since the massive breach of credit reporting giant Equifax, which exposed 143 million U.S. consumers to identity theft and other losses. Today, even more businesses are exposed to rapidly changing technologies that are hungry to produce, share, and distribute data. This blog explores the dangers of leaving high-value, sensitive information unprotected. It also provides a three-step approach against inevitable data breaches with encryption at its core.

After Equifax, Do Emerging Technologies Bring New Dilemmas?

Few things are more disappointing than high-impact disasters that could have been averted. When the credit reporting giant Equifax announced that it was breached in May 2017, the personally identifiable information (PII) of 143 million U.S. consumers was stolen. Further investigation revealed that Equifax not only failed to apply critical vulnerability patches and perform regular security reviews, but also stored sensitive information in plaintext, without encryption.

The Equifax breach, one of the worst data breaches in history, was preventable. The attack stemmed from a critical vulnerability in Apache Struts for which a patch had been available since March 2017, two months before the breach. There are multiple ways to defend against an inevitable breach, and one of the strongest is to encrypt high-value, sensitive data at rest.

Every day, approximately 7 million records are lost or stolen in data breaches. The majority of the data in these breaches was unsecured or unencrypted. A global study on the state of payment data security revealed that only 43% of companies use encryption or tokenization at the point of sale.

Today’s IT security experts face new challenges. Small businesses and organizations of the same size as Equifax have started to adopt technology trends in the fields of democratized artificial intelligence (AI), digitalized ecosystems, do-it-yourself biohacking, transparently immersive experiences and ubiquitous infrastructure. As emerging technologies spread into more industries beyond banks and government agencies, the risk of another Equifax-scale disaster grows. IT security teams need to ensure that sensitive data is protected wherever it resides.

Breaking Myths about Encryption

Encryption can cover threat scenarios across a broad variety of data types. Of all breaches recorded since 2013, only 4% were ‘secure breaches’, i.e. those where encryption was used. Yet businesses tend to bypass encryption in favour of perimeter defences and other newer technologies because of common misconceptions.

Many decision makers regard encryption as a costly solution that only applies to businesses with hardware compliance requirements. Encryption services, however, have grown to offer scalable data solutions. Encryption gives businesses the choice to encrypt data at one or more of the following levels: application, file, database and virtual machine. Encrypting data at the source, managing keys, and limiting access controls ensures that data is protected on both the cloud provider’s and the data owner’s ends.
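
As a minimal example of application-level encryption, one of the layers listed above, the snippet below uses AES-256-GCM from the open-source Python `cryptography` library to encrypt a record at the source. It sketches the general technique, not Gemalto’s SafeNet APIs; key handling is simplified for illustration.

```python
# Minimal application-level encryption of a record using AES-256-GCM.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # in practice, held in a key manager/HSM
aesgcm = AESGCM(key)

plaintext = b"customer PII record"
nonce = os.urandom(12)                     # must be unique per encryption
ciphertext = aesgcm.encrypt(nonce, plaintext, b"record-42")  # bound to record ID

# Only a holder of the key (plus nonce and associated data) can recover the record.
recovered = aesgcm.decrypt(nonce, ciphertext, b"record-42")
assert recovered == plaintext
```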

Encrypting data is a flexible investment that ensures high levels of security and compliance for the broadest range of businesses. A reliable encryption service can free businesses from worrying about data tampering, unauthorized access, insecure data transfers, and compliance issues.

In an age of inevitable data breaches, encryption is a necessary security measure that can render data inaccessible to attackers or useless to illegal vendors.

The Value of ‘Unsharing’ Your Sensitive Data

Today’s businesses require data to be shared in more places, where it rests at constant risk of theft or malicious access. Relying on perimeter protection alone is a reactive solution that leaves data unprotected from unknown and advanced threats, such as targeted attacks, new malware, or zero-day vulnerabilities.

More organizations are migrating data to the cloud, enabling big data analysis, and granting access to potential intellectual property or personally identifiable information. It is vital for organizations to start ‘unsharing’ sensitive data. But what does it mean to unshare?

Unsharing data means ensuring that high-value, sensitive information – such as intellectual property, personally identifiable information, and company financials – remains on lockdown wherever it resides. It means that only approved users and processes are able to use the data.

This is where encryption comes in. To fully unshare data, organizations need to encrypt everything. Here are three steps on how to unshare and protect sensitive data through encryption:

  1. Locate sensitive data – Organizations need to identify where data resides in cloud and on-premise environments.
  2. Encrypt sensitive data – Security teams need to decide on the granular levels of data encryption to apply.
  3. Manage encryption keys – Security teams also need to manage and store keys for auditing and control.

Despite common myths surrounding data encryption, remember that applying it gives companies the best return by providing both data protection and authorized access. To know more about the value of unsharing your data and applying an encryption-centered security approach, you can read our ebook titled Unshare and Secure Sensitive Data – Encrypt Everything.
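
Steps 2 and 3 above often come together as envelope encryption: each record is encrypted with its own data key, and that data key is itself wrapped under a master key held by a central key manager or HSM. The sketch below, using the open-source Python `cryptography` library with invented names, illustrates the pattern only and is not a vendor implementation.

```python
# Envelope encryption sketch: per-record data keys wrapped by a master key.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

master_key = AESGCM.generate_key(bit_length=256)  # would live in a key manager / HSM

def encrypt_record(plaintext: bytes) -> dict:
    data_key = AESGCM.generate_key(bit_length=256)
    n1, n2 = os.urandom(12), os.urandom(12)
    ciphertext = AESGCM(data_key).encrypt(n1, plaintext, None)        # step 2: encrypt data
    wrapped_key = AESGCM(master_key).encrypt(n2, data_key, None)      # step 3: wrap/manage key
    return {"ciphertext": ciphertext, "wrapped_key": wrapped_key, "n1": n1, "n2": n2}

def decrypt_record(blob: dict) -> bytes:
    data_key = AESGCM(master_key).decrypt(blob["n2"], blob["wrapped_key"], None)
    return AESGCM(data_key).decrypt(blob["n1"], blob["ciphertext"], None)

blob = encrypt_record(b"intellectual property: design doc v3")
assert decrypt_record(blob) == b"intellectual property: design doc v3"
```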

View the original post at Gemalto.com.

Wi-Fi 6 fundamentals: What is 1024-QAM?

October 25th, 2018

IDC sees Wi-Fi 6 (802.11ax) deployment ramping significantly in 2019 and becoming the dominant enterprise Wi-Fi standard by 2021. This is because many organizations still find themselves limited by the previous Wi-Fi 5 (802.11ac) standard. This is particularly the case in high-density venues such as stadiums, convention centres, transportation hubs, and auditoriums. With an expected four-fold capacity increase over its Wi-Fi 5 (802.11ac) predecessor, Wi-Fi 6 (802.11ax) is successfully transitioning Wi-Fi from a ‘best-effort’ endeavour to a deterministic wireless technology that is fast becoming the de-facto medium for internet connectivity.

Wi-Fi 6 (802.11ax) access points (APs) deployed in dense device environments such as those mentioned above support higher service-level agreements (SLAs) to more concurrently connected users and devices – with more diverse usage profiles. This is made possible by a range of technologies that optimize spectral efficiency, increase throughput and reduce power consumption. These include 1024-Quadrature Amplitude Modulation (QAM), Target Wake Time (TWT), Orthogonal Frequency-Division Multiple Access (OFDMA), BSS Coloring and MU-MIMO.

In this article, we’ll be taking a closer look at 1024-QAM and how Wi-Fi 6 (802.11ax) wireless access points can utilize this mechanism to significantly increase throughput.

1024-QAM

Quadrature amplitude modulation (QAM) is a highly developed modulation scheme used in the communication industry in which data is transmitted over radio frequencies. For wireless communications, QAM is a signal in which two carriers (two sinusoidal waves), shifted in phase by 90 degrees (a quarter out of phase), are modulated, and the resultant output consists of both amplitude and phase variations. These variations form the basis for the transmitted binary bits – the atoms of the digital world – that result in the information we see on our devices.


Two sinusoidal waves shifted by 90 degrees

By varying these sinusoidal waves through phase and amplitude, radio engineers can construct signals that transmit an ever-higher number of bits per hertz (information per signal). Systems designed to maximize spectral efficiency care a great deal about bits/hertz efficiency and thus are always employing techniques to construct ever denser QAM constellations to increase data rates. Put simply, higher QAM levels increase throughput capabilities in wireless devices. By varying the amplitude of the signal as well as the phase, Wi-Fi radios are able to construct the following constellation diagram that shows the values associated with the different states for a 16 QAM signal.

16-QAM constellation example

While the older Wi-Fi 5 (802.11ac) standard is limited to 256-QAM, the new Wi-Fi 6 (802.11ax) standard incorporates an extremely high optional modulation scheme (1024-QAM), with each symbol (a point on the constellation diagram) encoding a larger number of data bits when using a dense constellation. In real-world terms, 1024-QAM enables a 25% data rate increase (throughput) in Wi-Fi 6 (802.11ax) access points and devices. With over 30 billion connected “things” expected by 2020, higher wireless throughput facilitated by 1024-QAM is critical to ensuring quality-of-service (QoS) in high-density locations such as stadiums, convention centres, transportation hubs, and auditoriums. Indeed, applications such as 4K video streaming (which is becoming the norm) are expected to drive internet traffic to 278,108 petabytes per month by 2021.
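
The 25% figure follows directly from the number of bits a symbol can carry, which grows as log2 of the constellation size: 8 bits per symbol for 256-QAM versus 10 bits per symbol for 1024-QAM. A quick check:

```python
# Bits per symbol grow as log2 of the QAM constellation size, which is where
# the ~25% data-rate gain of 1024-QAM over 256-QAM comes from (10 vs 8 bits).
import math

def bits_per_symbol(qam_order: int) -> int:
    return int(math.log2(qam_order))

for order in (16, 64, 256, 1024):
    print(f"{order}-QAM: {bits_per_symbol(order)} bits/symbol")

gain = bits_per_symbol(1024) / bits_per_symbol(256) - 1
print(f"1024-QAM vs 256-QAM: {gain:.0%} more bits per symbol")  # 25%
```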

Ensuring fast and reliable Wi-Fi coverage in high-density deployment scenarios with older Wi-Fi 5 (802.11ac) APs is increasingly difficult as streaming 4K video and AR/VR content becomes the norm. This is precisely why the new Wi-Fi 6 (802.11ax) standard offers up to a four-fold capacity increase over its Wi-Fi 5 (802.11ac) predecessor. With Wi-Fi 6 (802.11ax), multiple APs deployed in dense device environments can collectively deliver required quality-of-service to more clients with more diverse usage profiles.

This is made possible by a range of technologies – such as 1024-QAM – which enables a 25% data rate increase (throughput) in Wi-Fi 6 (802.11ax) access points and devices. From our perspective, Wi-Fi 6 (802.11ax) is playing a critical role in helping Wi-Fi evolve into a collision-free, deterministic wireless technology that dramatically increases aggregate network throughput to address high-density venues and beyond. Last, but certainly not least, Wi-Fi 6 (802.11ax) access points are also expected to help extend the Wi-Fi deployment cycle by providing tangible benefits for legacy wireless devices.

View the original post by Dennis Huang at The Ruckus Room.

Net-Ctrl Blog - mobile

Multi-Gigabit Use Cases

November 9th, 2018

These days, most access switches and end-user devices have 1 GbE ports, which are plentiful, highly competitive and affordable. Though currently a minority, the number of access points with 2.5 Gigabit Ethernet ports to support 802.11ac access points (APs) is increasing. Indeed, there is a range of devices – both on the market and those anticipated to launch – that support Ethernet switches with 2.5 GbE ports.

Unsurprisingly, switches with 2.5 GbE ports cost more than those with 1 GbE ports. Ruckus offers 2.5 GbE switches at a modest premium, although many other vendors sell 2.5 GbE, 5 GbE and 10 GbE ports that are more expensive and generally overkill for 802.11ac (Wi-Fi 5). Many 802.11ax (Wi-Fi 6) APs hitting the market will feature 5 GbE ports, although there are still few other devices expected to support 5 GbE.

When to use multi-gigabit connectivity

10 GbE Ethernet – which was part of the original 802.3bz standard – is primarily used for servers, storage and other devices in the data center. There are very few end-user devices that support 10GbE. However, more and more devices, such as laptops, point of sale units and video cameras are losing their tethers and moving to wireless connectivity. This increases the data load on wireless networks and drives the primary use case for 2.5 GbE and 5 GbE, as well as a new generation of access points. Multi-gigabit connectivity should be considered as organizations move to 802.11ac (Wi-Fi 5) and 802.11ax (Wi-Fi 6) and start implementing the next generation of Wi-Fi networks.

There are additional features to consider that go hand-in-hand with multigigabit connections, such as Power over Ethernet requirements (PoE) and future growth expectations. Indeed, it is important to understand the PoE power requirements for a new generation of access points equipped with multi-gigabit ports. Early APs routinely operated on PoE, consuming just 15 watts of power at the switch. However, more powerful radios consume more power. Even so, most APs today can still be powered by PoE or PoE+, the latter of which feeds 30 watts to the AP. However, while the latest 802.11ac (Wi-Fi 5) APs can operate on 30 watts of power, many need just a little more to achieve top performance – to drive all the radios and provide power to the USB port.

The newest generation of 802.11ax (Wi-Fi 6) APs is likely to require even more power than their predecessors. While 802.11ax (Wi-Fi 6) APs will operate on PoE+ power, they will demand more power to drive 8×8 radios and achieve peak performance. A new standard known as 802.3bt is expected to address the PoE requirements for 802.11ax (Wi-Fi 6) APs, as well as for devices such as LED lighting, pan-tilt-zoom (PTZ) cameras and HDTVs. 802.3bt – which incorporates both 60 watts and 90 watts of power per port – was ratified by the IEEE in September 2018. Organizations planning to deploy new switches with multi-gigabit connectivity should make sure they deliver sufficient PoE to support newer APs.

It should also be noted that there are detailed specifications for connections running at more than one gigabit per second over standard twisted-pair copper cabling. It is therefore important to understand the requirements and how they match existing cabling. The IEEE modified the 802.3bz standard in 2016 to add 2.5 gigabits and five gigabit Ethernet over twisted pair wiring. This was done specifically to support connecting new generations of Wi-Fi over copper without having to move to fiber optics.

The type of cabling that is required – both for one gigabit and 2.5 gigabit – can run over Cat 5e cabling for up to 100 meters. However, five gigabits per second requires Cat 6 cabling to run up to 100 meters and 10 gigabits per second requires Cat 6a. A significant number of buildings still only have Cat 5e cabling, in which case supporting faster speeds would require re-cabling a property. In practical terms, this means organizations should check the type of cabling currently installed in their buildings when considering an upgrade to multi-gigabit. If new cabling is required, organizations should be sure to calculate the upgrade costs and determine if moving to multi-gigabit is worth the expense.

Organizations should also be sure to understand the life-cycle of their infrastructure. More specifically, Wi-Fi standards, equipment, and gigabit usage are growing so rapidly that companies and organizations are refreshing their Wi-Fi access points approximately every three years. However, the switch lifecycle averages closer to five to seven years for commercial enterprises – and up to seven to ten years for the education market. So, organizations should ensure that new switch purchases will support current Wi-Fi networks and at least one more refresh cycle, if not more. During this period, they will see more users, more devices per users and a greater demand for throughput generated by streaming audio and video. Put simply, future-proofing switching is essential to protecting any network infrastructure investment.

View the original blog post by Rick Freedman at The Ruckus Room.

Cloud Security: How to Secure Your Sensitive Data in the Cloud

November 9th, 2018

In today’s always-connected world, an increasing number of organisations are moving their data to the cloud for operational efficiency, cost management, agility, scalability, etc.

As more data is produced, processed, and stored in the cloud – a prime target for cybercriminals who are always lurking around to lay their hands on organisations’ sensitive data – protecting the sensitive data that resides on the cloud becomes imperative.

While most Cloud Service Providers (CSPs) have already deployed strong front line defence systems like firewalls, anti-virus, anti-malware, cloud-security intrusion detection, etc. to thwart malicious attacks, sophisticated hackers are breaching them with surprising ease today. And once a hacker gains an inside entry by breaching the CSP’s perimeter defences, there is hardly anything that can be done to stop him from accessing an organisation’s sensitive data. Which is why more and more organisations are encrypting their cloud data today as a critical last line of defence against cyber attacks.

Data Encryption Is Not Enough

While data encryption definitely acts as a strong deterrence, merely encrypting the data is not enough in today’s perilous times where cyber attacks are getting more sophisticated with every passing day. Since the data physically resides with the CSP, it is out of the direct control of the organisations that own the data.

In a scenario like this where organisations encrypt their cloud data, storing the encryption keys securely and separately from the encrypted data is of paramount importance.

Enter BYOK

To ensure optimal protection of their data in the cloud, an increasing number of organisations are adopting a Bring Your Own Key (BYOK) approach that enables them to securely create and manage their own encryption keys, separate from the CSP’s where their sensitive data is being hosted.

However, as more encryption keys are created for an increasing number of cloud environments like Microsoft Azure, Amazon Web Services (AWS), Salesforce, etc., efficiently managing the encryption keys of individual cloud applications and securing the access, becomes very important. Which is why many organisations use External Key Management (EKM) solutions to cohesively manage all their encryption keys in a secure manner that is bereft of any unauthorised access.

Take the example of Office 365, Microsoft’s on-demand cloud application that is widely used by organisations across the globe to support employee mobility by facilitating anytime, anywhere access to Microsoft’s email application – MS Outlook and business utility applications like MS Word, Excel, PowerPoint, etc.

Gemalto’s BYOK solutions (SafeNet ProtectApp and SafeNet KeySecure) for Office 365 not only ensure that organisations have complete control over their encrypted cloud data but also seamlessly facilitate efficient management of the encryption keys of other cloud applications like Azure, AWS, Google Cloud and Salesforce.

Below is a quick snapshot of how SafeNet ProtectApp and SafeNet KeySecure seamlessly work with Azure BYOK:

To elaborate, below is the step-by-step process of how this works:

  1. SafeNet ProtectApp and KeySecure are used to generate a RSA Key Pair or required Key size using the FIPS 140-2 certified RNG of KeySecure.
  2. A Self-SignedCertificateUtility.jar (which is a Java-based application) then interacts with KeySecure using a TLS-protected NAE service to fetch the Key Pair and create a Self-signed Certificate.
  3. The Key Pair and Self-signed Certificate are stored securely in a PFX or P12 container that encrypts the contents using a Password-based Encryption (PBE) Key.
  4. The PFX file (which is an encrypted container using a PBE Key) is then uploaded on Azure Key Vault using Azure Web API / Rest.
  5. The transmission of the PFX file to the Azure Key Vault is protected using security mechanisms implemented by Azure on their Web API (TLS / SSL, etc.).
  6. Since the PFX files will be located on the same system on which the SelfSignedCertificateUtility.jar utility will be executed, industry-best security practices like ensuring pre-boot approval, enabling two-factor authentication (2FA), etc. should be followed.
  7. Once the Keys are loaded on Azure Key Vault, all encryption operations happen on Azure platform itself.

Continue to find out what to consider when choosing a Key Management solution, as well as how Gemalto can support organisations to make their BYOK journey easier.

To Sum It Up

As technology evolves, so do cybercriminals, and merely encrypting data no longer guarantees foolproof data protection today. While encrypting their sensitive cloud data, organisations must bear in mind that securing and managing the encryption keys is as important as the encryption itself.

To prevent unauthorized access and ensure that the encryption keys don’t fall in the wrong hands, cybersecurity experts unanimously recommend the use of Hardware Security Module (HSM) devices to securely store the encryption keys.

Since encryption keys pass through multiple phases during their lifetime – like generation, storage, distribution, backup, rotation and destruction, efficiently managing these keys at each and every stage of their lifecycle becomes important. A secure and centralized key management solution is critical.

Gemalto’s SafeNet KeySecure offers organisations a robust centralized platform that seamlessly manages all encryption keys. Below are some key benefits that make SafeNet KeySecure a preferred choice for organisations across the globe:

  1. Heterogeneous key management – helps in seamlessly managing multiple encryption keys at each stage of their lifecycle.
  2. Logging and auditing – helps in storing audit trails that can be analyzed using any leading SIEM tool.
  3. Centralized management console – helps in assigning administrator roles according to the scope of their responsibilities.
  4. High interoperability – supports a broad ecosystem of technology partners through the OASIS KMIP standard (see the sketch below).
  5. Automated operations – reduces the overall cost of data security.
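Because KeySecure speaks the OASIS KMIP protocol, any KMIP-capable client can drive basic key lifecycle operations against it. As a rough illustration only, the following Python sketch uses the open-source PyKMIP library; the hostname, port and certificate paths are placeholder assumptions rather than Gemalto-specific values, and the exact client options may differ in a real deployment.

# Rough sketch of key lifecycle operations against a KMIP-compatible key manager
# using the open-source PyKMIP library. Endpoint and credential paths are placeholders.
from kmip.core import enums
from kmip.pie.client import ProxyKmipClient

client = ProxyKmipClient(
    hostname="keymanager.example.com",  # placeholder KMIP endpoint
    port=5696,                          # standard KMIP port
    cert="/etc/pki/kmip/client-cert.pem",
    key="/etc/pki/kmip/client-key.pem",
    ca="/etc/pki/kmip/ca.pem",
)

with client:
    # Generation: create a 256-bit AES key on the key manager.
    key_id = client.create(enums.CryptographicAlgorithm.AES, 256, name="demo-dek")

    # Distribution: fetch the key when an authorised application needs it.
    managed_key = client.get(key_id)

    # Destruction: retire the key at the end of its lifecycle.
    client.destroy(key_id)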

Learn more about how Gemalto’s suite of cloud security solutions can help your organisation fully secure your data in the cloud.

View the original article at Gemalto.com.

Take a number, we’ll be right with you: Wi-Fi connections and capacity

November 7th, 2018

Wi-Fi connects the world, one device at a time. Literally. One. Device. At. A. Time. Wi-Fi is a half-duplex technology, which means only one device on a channel gets to transmit at any given moment. All other devices sharing that channel have to wait their turn to make Wi-Fi connections. Yet we talk about high capacity and how many devices an AP can support. What does that mean if the answer is always one?

When more than one device is connected to an AP, they must share the air. All other things being equal, the devices and the AP (it counts as a device too!) will take turns transmitting. You could easily have 10, 50, 100, or more devices connected to an AP. But each still has to wait for its turn to talk.

If you want to sound like a Wi-Fi pro, you'll need to understand a few things about capacity: how many Wi-Fi connections an AP can keep track of, how many devices are trying to talk simultaneously, and how fast each one can talk.

You might have 100 devices connected to an AP, but if only 10 need to transmit at a given time, you don’t have to wait long for your turn. The other 90 devices stay connected and hang out until they have something to say.

Now, imagine you’ve got 500 devices connected and 250 want to talk simultaneously. That’s like being stuck in line at the restroom during a concert and there are 249 people ahead of you. Yikes.

If all of the devices are fast, your turn will come much more quickly: think of your 802.11ac smartphone versus Grandma’s old 802.11g laptop. No matter what you do, the phone will be capable of going faster than the laptop. But that doesn’t mean they will get the same performance on all APs.
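To make the airtime arithmetic concrete, here is a small, purely hypothetical Python sketch of equal airtime sharing on a single half-duplex channel; the PHY rates and device counts are illustrative placeholders rather than measured figures.

# Hypothetical model of half-duplex airtime sharing: each device contending for the
# channel gets an equal slice of airtime, so its effective throughput is roughly its
# own PHY rate divided by the number of devices talking at the same time.
def effective_throughput(phy_rate_mbps: float, contending_devices: int) -> float:
    """Per-device throughput under equal airtime sharing (very rough)."""
    return phy_rate_mbps / max(contending_devices, 1)

# 100 devices associated, but only 10 transmitting right now.
print(effective_throughput(400.0, 10))   # fast 802.11ac phone: ~40 Mbps each
print(effective_throughput(54.0, 10))    # old 802.11g laptop: ~5.4 Mbps each

# 500 devices associated and 250 contending: everyone waits much longer.
print(effective_throughput(400.0, 250))  # ~1.6 Mbps each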

Ruckus helps you wring every last bit of speed out of any device with innovations like BeamFlex+, transient client management, auto RF cell sizing, airtime decongestion, and much more. When you’ve got a network with lots of Wi-Fi devices (why, hello, IoT), any extra performance boosts can make a big difference.

Read the original report at The Ruckus Room.

Roam If You Want To: Moving Wi-Fi Devices Around the World

October 30th, 2018

Wi-Fi means mobility. Devices that can move around and be free of all that pesky cabling. Roaming, the ability for a device to move from one AP to another, is critical. There is nothing more frustrating than sitting underneath an AP—you can see it, the AP is right there—but you aren’t getting a strong signal. What to do?

First, let’s address a common misconception: who tells a Wi-Fi device to roam? Most people will say the AP. In fact, the device decides and there is very little the AP—or any other device—can do about it.

Usually, the client stays connected to its AP until the signal strength drops and becomes too low. How low? Well, that’s up to the device. And all devices are a little bit different. A few allow you to adjust the roaming threshold setting, but most do not. A client device that should roam but doesn’t is known as a sticky client.
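As a purely hypothetical illustration (vendors implement this differently and rarely expose it), a client's roaming decision can be thought of as a signal-strength threshold plus a hysteresis margin, which is also why a threshold set too low produces a sticky client. The RSSI values below are made up.

# Hypothetical sketch of a client-side roaming decision: stay on the current AP until
# its signal drops below a threshold, then move only if a neighbour is clearly stronger.
ROAM_THRESHOLD_DBM = -70   # below this, start looking for a better AP
HYSTERESIS_DB = 8          # a candidate must beat the current AP by this margin

def should_roam(current_rssi_dbm: float, best_neighbour_rssi_dbm: float) -> bool:
    if current_rssi_dbm >= ROAM_THRESHOLD_DBM:
        return False  # current signal is still acceptable: stay put
    return best_neighbour_rssi_dbm >= current_rssi_dbm + HYSTERESIS_DB

print(should_roam(-62, -55))  # False: current AP is still fine
print(should_roam(-78, -60))  # True: weak signal, much stronger neighbour available
print(should_roam(-78, -74))  # False: neighbour not better enough -> "sticky" behaviour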

But you’re still here, standing under an AP with a device reporting a low signal. How do you get that stubborn thing to roam?

Fortunately, there are some tricks available. The first is a standards-based method that tells the device about the network and helps it make a better decision. The IEEE 802.11k standard allows a device to ask an AP about the network, including how well the AP can hear it, and vice versa. It can even get a list of neighbouring APs instead of having to scan for itself. Kind of like using a dating service versus going to a bar to meet someone new! More importantly, it gives the device a much better idea of whether it’s time to move on or stick with its current AP.

Another standard, 802.11v, allows an AP to politely request that the device move and even provide a list of suggested APs the device could move to. Sounds great!

The downside to both of these is that the AP and the device each need to support the standard. Some do, but not all.

We mentioned that roaming—the decision by a device to disconnect from an AP and connect to a new one—is a device decision. But there is a way that an AP can "force" a device to move: it can send a de-authentication message that disconnects the device. Of course, the device will automatically try to reconnect. As part of the reconnection process, it scans its surroundings—Egad! I'm right under an AP!—and connects to the closer AP.

All Ruckus APs support this. As a matter of fact, we combine this concept of forcing a device to move along with some other intelligence around exactly when to use it, plus standards like 802.11k and 802.11r. We call it SmartRoam+. You can call it helping Wi-Fi devices roam more quickly and seamlessly.

There’s a lot more to device roaming and we’ll save that for another post. But in the meantime, you can use this to get your “stuck” devices moving again.

View the original post at The Ruckus Room.

Data Encryption: From “Good-to-Have” To “Must-Have”

October 30th, 2018

Whenever the news of any data breach surfaces, the first action of most organisations is to take an immediate stock of their IT perimeter defences and update them to avoid getting breached themselves.

While it is definitely a good strategy to ensure that perimeter defence systems like firewalls, antivirus and antimalware – which act as the first line of defence – are always kept updated, focusing only on these mechanisms is no longer sufficient at a time when hackers are breaching organisations' cybersecurity more frequently than ever before.

As per the H1 results of Gemalto’s 2018 Breach Level Index, more than 3.3 billion data files were breached across the globe in the first six months of 2018 alone. This figure marks an increase of a whopping 72% over those recorded for H1 2017! And unsurprisingly, more than 96% of these breaches occurred on data that was not encrypted.

The latest victim of data theft in India is Pune-based digital lending startup EarlySalary, which suffered a massive data breach in which the personal details, employment status and mobile numbers of some 20,000 potential customers were stolen. The company discovered the breach only after it received a ransom demand from the hackers, following which it plugged the vulnerability. While the company claimed that the attack was centred on one of its older landing pages, the damage was already done.

With rising cyber attacks such as these, organisations can no longer live under the illusion that once they deploy robust perimeter defence systems, they are safe. Whether it is an attack on startups like EarlySalary that may have rudimentary perimeter defences or conglomerates like Facebook, SingHealth and Equifax that most likely had deployed top-notch front-line defence systems, the common denominator between the data breaches at all these organisations is that they focused only on their front line defences (perimeter security) while ignoring their last line of defence – data encryption.

Secure the Data, Not Just the Systems

While perimeter security mechanisms indeed act as a strong deterrent against cyber attacks, they are rendered useless once hackers gain inside access to an organisation's data files.

Whether the data is at rest or in motion (during transfer), encrypting it is perhaps the surest way of safeguarding it against malicious attacks. Since encryption makes it virtually impossible to decipher the data without the corresponding decryption key, hackers have little incentive to breach organisations that have encrypted their data.

Below are three steps that organisations need to take to ensure optimal data protection:

1. Locate sensitive data

First, identify where your most sensitive data files reside – audit your storage and file servers, applications, databases and virtual machines, along with the data that’s flowing across your network and between data centers.

2. Encrypt & Tokenize it

When choosing a data encryption solution, make sure that it meets two important objectives – protecting your sensitive data at each stage and tokenizing it.

Gemalto’s SafeNet Data Encryption Solutions not only encrypt data seamlessly at each stage (at rest and in motion) but also incorporate a proprietary Tokenization Manager that automatically generates a random surrogate value (also known as a Token or Reference Key) for each data file to avoid easy identification.
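As a generic illustration of the tokenization idea (not SafeNet's implementation), the sketch below replaces a sensitive value with a random surrogate and keeps the token-to-value mapping in a separate, access-controlled vault; the in-memory dictionary stands in for that hardened store.

# Generic tokenization sketch: a random surrogate token replaces the sensitive value,
# and the mapping back to the real value lives only in a separate, protected vault.
import secrets

token_vault = {}  # token -> sensitive value; stand-in for a hardened, access-controlled datastore

def tokenize(sensitive_value: str) -> str:
    token = secrets.token_urlsafe(16)      # random surrogate; reveals nothing about the value
    token_vault[token] = sensitive_value   # mapping is stored only in the vault
    return token

def detokenize(token: str) -> str:
    return token_vault[token]              # callable only by authorised services

card_token = tokenize("4111 1111 1111 1111")
print(card_token)               # safe to store or pass to downstream systems
print(detokenize(card_token))   # the original value is recoverable only via the vault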

3. Safeguard and manage your crypto keys

To ensure zero-compromise of your data’s encryption keys, it is important that the keys are stored securely and separately from your encrypted data. Use of Hardware Security Modules (HSMs) is perhaps the surest way of ensuring optimal key security.
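To illustrate why keeping keys away from the ciphertext matters, here is a minimal envelope-encryption sketch using the open-source cryptography package; in a real deployment the key-encryption key would live inside an HSM and never appear in application memory, so treat the variables below as placeholders.

# Minimal envelope-encryption sketch: data is encrypted with a data-encryption key (DEK),
# and the DEK is wrapped with a key-encryption key (KEK) that an HSM would normally hold.
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

kek = AESGCM.generate_key(bit_length=256)   # stand-in for a KEK protected by an HSM
dek = AESGCM.generate_key(bit_length=256)   # per-dataset data-encryption key

# Encrypt the data with the DEK.
data_nonce = os.urandom(12)
ciphertext = AESGCM(dek).encrypt(data_nonce, b"sensitive customer record", None)

# Wrap the DEK with the KEK; only the wrapped DEK is stored alongside the data.
wrap_nonce = os.urandom(12)
wrapped_dek = AESGCM(kek).encrypt(wrap_nonce, dek, None)

# Stealing `ciphertext` and `wrapped_dek` is useless without the separately held KEK.
recovered_dek = AESGCM(kek).decrypt(wrap_nonce, wrapped_dek, None)
print(AESGCM(recovered_dek).decrypt(data_nonce, ciphertext, None))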

When choosing an HSM solution, make sure that it also provides key management, so the crypto keys can be managed at each stage of their lifecycle – generation, storage, distribution, backup, rotation and destruction.

Gemalto's SafeNet HSMs come with a built-in key management feature that provides a single, robust, centralized platform for managing the crypto keys at every stage of their lifecycle.

5 Reasons Why Data Encryption Becomes a MUST

With cyber attacks on the rise with every passing day, the cybersecurity landscape across the globe has witnessed a tectonic shift in the last few years. First-line defence mechanisms like perimeter security are no longer sufficient to prevent data breaches, since once an intruder is inside, there is hardly anything that can be done to protect data that is not encrypted.

Realising this, Governments across the globe are introducing stringent regulations like the General Data Protection Regulation (GDPR), RBI’s Data Localisation, PCI-DSS and the upcoming Personal Data Protection Law, 2018 in India to ensure that organisations make adequate security provisions to protect their users’ confidential data.

Below are a few reasons why data encryption is no longer “good-to-have”, but “must-have” in today’s world:

1. Encryption Protects Data At All Times

Whether the data is at rest or in motion (transit), encryption protects it against all cyber attacks, and in the event of one, renders it useless to attackers.

2. Encryption Maintains Data Integrity

Cyber criminals don't always breach an organisation's cybersecurity to steal sensitive information. As seen in the case of the Madhya Pradesh e-Tender scam, they often breach organisations to alter sensitive data for monetary gain. Encryption maintains data integrity at all times and immediately flags any alteration to the data.

3. Encryption Protects Privacy

Encryption keeps users' personal data safe while upholding their anonymity and privacy, which reduces the opportunities for surveillance by governments or cyber criminals. This is one of the primary reasons why Apple strongly believes that encryption will only strengthen our protection against cyberattacks and terrorism.

4. Encryption Protects Data Across Devices

In today's increasingly Bring Your Own Device (BYOD) world, data transfer between multiple devices and networks opens avenues for cyber attacks and data theft. Encryption greatly reduces these risks and safeguards data across all devices and networks, even during transit.

5. Encryption Facilitates Regulatory Compliance

To safeguard users' personal data, organisations across many industries have to comply with stringent data protection regulations like HIPAA, GDPR, PCI DSS, RBI Data Localisation, FIPS, etc. that are mandated by their regulators. Encryption assures optimal data protection and helps ensure regulatory compliance.

It’s time for a new data security mindset. Learn how Gemalto’s 3-step Secure the Breach approach can help your organisation secure your sensitive data from cyber-attacks.

For more information contact Net-Ctrl direct through our Contact Page, or call us direct on 01473 281 211.

View the original article by Gemalto.

Multi-gigabit is right here, right now

October 26th, 2018

I recently came across an interesting TechTarget article that discusses when an organization should upgrade to multi-gigabit (mGig) switches to support a new generation of 802.11ax access points (APs). As we’ve previously discussed here on the Ruckus Room, the IEEE 802.11ax (Wi-Fi 6) standard features multiple enhancements that enable access points to offer an expected four-fold capacity increase over its 802.11ac Wave 2 predecessor (Wi-Fi 5) in dense scenarios.

The introduction of 802.11ax (Wi-Fi 6) access points is certainly timely, as many organizations are already pushing the limits of the 802.11ac (Wi-Fi 5) standard, particularly in high-density venues such as stadiums, convention centers, transportation hubs, and auditoriums. Indeed, the proliferation of connected devices, along with 4K video streaming, is placing unprecedented demands on networks across the globe.

To accommodate the demand for increased capacity, some organizations have begun deploying 802.11ax (Wi-Fi 6) access points alongside existing 802.11ac (Wi-Fi 5) access points, with the former expected to become the dominant enterprise Wi-Fi standard by 2021. To take full advantage of the speeds offered by 802.11ax (Wi-Fi 6) APs (up to 5 gigabits per second), organizations have also begun installing multi-gigabit switches to either replace or supplement older infrastructure. This is because system administrators cannot ensure a quality user experience by simply upgrading one part (access points) of a network. Reaping the benefits of 802.11ax (Wi-Fi 6) requires upgrades on the switch side as well.

The transition to multi-gigabit switches

It is important to emphasize that the transition to multi-gigabit switches does not necessarily require a wholesale infrastructure upgrade. It can happen gradually, adding a few switches as needed. Furthermore, most multi-gigabit switches today include a mix of multi-gigabit and gigabit ports. Only those ports connected to 802.11ax (Wi-Fi 6) APs require multi-gigabit speeds, while the remaining gigabit ports are adequate for computers, printers, VoIP phones, cameras, and other Ethernet devices.

With the introduction of 802.11ax (Wi-Fi 6) starting now and the approaching avalanche of IoT connections, higher speed in the wired infrastructure is critical to prevent bottlenecks and maintain optimal network performance. I suggest that the transition to multi-gigabit switches should start now. With the average life for a switch being 5 to 7 years and up to 10 years for many institutions, the need for multi-gigabit connections will almost certainly be upon us within this timeframe.

Read the original post by Rick Freedman at The Ruckus Room.

Facing the Facebook Breach: Why Simple SSO is Not Enough

October 26th, 2018

Let's 'face' it. The September 2018 Facebook breach was not only a 'mega' breach in terms of the 50 million compromised users affected, but also a severe one because of the popularity of the social media giant. To recap, cyber criminals got ahold of users' Facebook login credentials. The breach was compounded by the fact that many users utilize their Facebook credentials to log into other sites, which means the hackers gained access not only to a user's Facebook account, but to all other accounts that use Facebook login credentials.

SSO not a social media fashion – it’s an enterprise must

In essence, the Facebook credentials act as a simple, 'eat all you want' Single Sign On (SSO) for other social platforms. But the popularity of SSO solutions is not just a Facebook fashion. It meets a genuine business need: the convenience of organizations that need access to their day-to-day web and cloud-based applications. Simple Single Sign On offers clear advantages for enterprises: no need to maintain a separate set of passwords for each and every application; a reduction in IT overload and password reset requests; and increased productivity for employees, contractors and remote workers, who authenticate once and access everything they need, any time and any place.

The demand for SSO in enterprises has grown with the rise in the number of web and cloud-based apps. However, along with wide SSO adoption has come the risk associated with simple SSO. Only a month before the Facebook breach, the potentially 'massive' security dangers of Single Sign On were discussed at the USENIX conference in Baltimore. The paper describes how criminals can gain control of numerous other web services when a single account is hacked.

Google+ access to 3rd party apps now a minus

When it comes to third-party app violations, Google has not been spared. Its 'Project Strobe' review revealed stark findings related to the third-party access APIs for Google+ users. Due to a bug, third-party apps were granted access to profile information that users had not marked as public in the first place. As a result, Google recommended sunsetting Google+ for consumers and concentrating R&D efforts on giving enterprises better control over what account data they choose to share with each app. Apps will need to show requested permissions one at a time, each in its own dialog box, as opposed to all requested permissions on a single screen.

Smart SSO with role-based policies

The risks that consumers were exposed to as a result of buffet-style sign-on in the Facebook case also apply to the enterprise. Fortunately, there is a solution: to maintain the convenience of single sign-on without compromising on security, enterprises can use Smart Single Sign-On. With a smart SSO solution such as Gemalto's SafeNet Trusted Access, enterprises can define conditional access policies. These policies can restrict or relax access to various applications, depending on the risk. For example, groups of users would be able to authenticate only once when working in the office, but have to re-enter their password or another form of 2FA (e.g. SMS, pattern-based code, hardware token) for more restricted access.
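To make that concrete, here is a purely hypothetical sketch of a scenario-based access policy in Python; the application names, network condition and rules are illustrative assumptions, not the policy model of SafeNet Trusted Access.

# Hypothetical scenario-based ("smart SSO") access policy: everyday apps ride the
# existing SSO session, while sensitive apps or risky contexts trigger step-up 2FA.
SENSITIVE_APPS = {"payroll", "hr-records", "finance"}

def required_authentication(app: str, on_trusted_network: bool, has_sso_session: bool) -> str:
    if not has_sso_session:
        return "password + 2FA"   # first login of the session: full authentication
    if app in SENSITIVE_APPS or not on_trusted_network:
        return "step-up 2FA"      # an SSO session alone is not enough in this scenario
    return "sso"                  # convenience preserved for low-risk access

print(required_authentication("email", on_trusted_network=True, has_sso_session=True))    # sso
print(required_authentication("payroll", on_trusted_network=True, has_sso_session=True))  # step-up 2FA
print(required_authentication("email", on_trusted_network=False, has_sso_session=False))  # password + 2FA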

To help increase trust without scrapping the convenience of SSO for most apps and scenarios, stepping up authentication after the SSO login is an advantage. Enterprises can choose their access controls for specific user groups, sensitive apps and contextual conditions by applying scenario-based access policies.

Trusted Identity as a Service Provider

Using access management, enterprises can federate dozens of cloud applications without unnecessary burdens on IT teams, while keeping in place the necessary protections.

With Smart SSO, the proliferation of cloud apps need not lead to a feast of security breach reports. To learn more about the smart face of single sign-on, and prevent an iDaaS-ter (Identity as a Service disaster), download the fact sheet, Matching Risk Policies to User Needs with Access Management, read more about Identity as a Service or watch how Gemalto SafeNet single sign-on solutions work in the cloud.

View the original post at Gemalto.com.

Gemalto’s vision for next generation digital security

October 25th, 2018

Digital transformation is a term that we’ve all heard a lot over the last 10 years, often in the context of a specific industry or process. But it’s undeniable now that the entire world is going through a digital transformation that is touching every aspect of our lives – how we live, how we work and how we discover the wider world around us.

An increasingly digital world means an ever-increasing number of pieces of data being exchanged every time we use an online service or a connected device. There are already billions of these exchanges taking place every day, and it’s estimated that by 2025, there will be 50 times more individual digital interactions than there were in 2010. This data defines our online lives, so being able to trust it is critical. With expectations from enterprises and consumers growing, both in the amount of information we share and how it’s protected, the challenge is a significant one.

Traditional security is no longer enough

Breaches are growing every year, across all sectors, with British Airways and Air Canada among the most recent high profile victims. Our Breach Level Index has tracked the number of data records lost or stolen since 2013, and with an estimated 5 million more being added every day, the total should easily hit a staggering 10 billion before the end of this year.

Technology firms have borne the brunt of these breaches but everyone is a target, from entertainment to healthcare and even education. In the majority of cases, the main cause of the attacks is identity theft. And once inside the network the real damage comes from unencrypted data – shockingly, 96% of breaches involved unencrypted data that the hacker could easily profit from (particularly in the case of credit card details).

The ever-growing list of high-profile breaches shows that traditional security solutions are reaching their limits. Faced with a worldwide digital transformation that shows no sign of slowing down, we need to deploy a new generation of digital security solutions. This next-generation security must help organizations verify users' identities in a purely online context. It must also remove the need for people to remember hundreds of (weak) passwords and shouldn't add intrusive security steps (which is why I see a growing role for biometrics and risk-based authentication). Finally, it needs to ensure that individuals' digital privacy is respected and their data isn't monetized – unless they've given their express permission. If not, people will leave the service and regulators will come down on offenders with heavy fines.

The portfolio of security services that we have built up over the last decade has put us in a unique position to help Service Providers and Governments answer these challenges by providing trusted digital identities to combat ID theft, and protection for previously unencrypted data.

Next generation digital security

Our strategic goal is to help our customers protect their entire digital service cycle from sign-up to opt-out. This starts when a new user has to prove his or her identity to register for a service, after which they are issued a strong digital ID in the form of a smartcard, a digital token or biometric data. When they log in, we can authenticate them using multiple factors and modes – from risk-based analysis to facial recognition. When they use the service, we can encrypt all data using key management techniques and hardware security modules. And when they leave, cryptographic account deletion means their data is unrecoverable.
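The last point, cryptographic account deletion, is easier to see with a tiny example. The sketch below is a generic illustration using the open-source cryptography package, not Gemalto's implementation: because each account's data is encrypted under its own key, destroying that key alone makes the remaining ciphertext unrecoverable.

# Generic "crypto-shredding" sketch: per-account data is encrypted under a per-account
# key, so deleting only the key renders the stored ciphertext permanently unreadable.
from cryptography.fernet import Fernet, InvalidToken

account_keys = {"alice": Fernet.generate_key()}               # placeholder key store
stored_blob = Fernet(account_keys["alice"]).encrypt(b"alice's profile and history")

# Account closure: destroy the key, even if copies of the ciphertext live on in backups.
del account_keys["alice"]

try:
    Fernet(Fernet.generate_key()).decrypt(stored_blob)        # any other key fails
except InvalidToken:
    print("Once the account key is destroyed, the data is unrecoverable.")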

We believe that there are four key pillars to next-generation digital security:

  • Open. We don’t believe in building walls around digital assets. To add value, people and data must be able to flow in an open, decentralized, federated yet secure, way.
  • Proven. It’s not enough to just say you’re an expert in security – you have to prove it, time and time again. Companies need measurable fraud reduction and liability management, and our long-term blue-chip customers are the best evidence of our capability here.
  • Hybrid. Security tools must be designed to work in the real world. That means data security must be flexible enough to deal with a mix of hybrid, on-premise and cloud IT environments.
  • Convenient. If security stops people from doing what they need to do, it’s failed. We’re providing smooth user experiences by leveraging technology like biometrics to help make authentication frictionless and invisible.

We’re proud to play our part in protecting the world’s data, and enabling organizations across the globe to have successful digital transformations. As you may have seen from the announcement by Thales of the proposed acquisition of Gemalto, they have the same view of the growing needs for digital security as we do. The plan is to keep Gemalto’s scope intact and coherent within a new global business unit at Thales; our activities would be combined with Thales assets and expertise in cybersecurity, data analytics and artificial intelligence, which would only increase our ability to fulfil this mission.

Interested in reading more on our vision for next-generation security?

This article originally appeared on Philippe Vallée’s LinkedIn profile.

Encryption and the Fight Against Threats from Emerging Data-Driven Technologies

October 25th, 2018

It has been a year since the massive breach at credit reporting giant Equifax, which exposed 143 million U.S. consumers to identity theft and other losses. Today, even more businesses are exposed to rapidly changing technologies that are hungry to produce, share and distribute data. This blog explores the dangers of leaving high-value, sensitive information unprotected. It also provides a three-step approach to inevitable data breaches, with encryption at its core.

After Equifax, Do Emerging Technologies Bring New Dilemmas?

Few things are more disappointing than high-impact disasters that could have been averted. When the credit reporting giant Equifax announced in September 2017 that it had been breached, the personally identifiable information (PII) of 143 million U.S. consumers had been stolen. Further investigation revealed that Equifax had not only failed to apply critical vulnerability patches and perform regular security reviews, but had also stored sensitive information in plaintext, without encryption.

The Equifax breach, one of the worst data breaches in history, was preventable. The attack stemmed from a critical vulnerability in Apache Struts for which a patch had been available since March 2017, two months before the intrusion began. There are multiple ways to defend against an inevitable breach, including those that exploit zero-day vulnerabilities, and one of the strongest is to encrypt high-value, sensitive data at rest.

Every day, approximately 7 million records are lost or stolen because of data breaches. The majority of the data in these breaches was unsecured or unencrypted. A global study on the state of payment data security revealed that only 43% of companies use encryption or tokenization at the point of sale.

Today's IT security experts face new challenges. Small businesses and organizations the size of Equifax alike have started to adopt technology trends such as democratized artificial intelligence (AI), digitalized ecosystems, do-it-yourself biohacking, transparently immersive experiences and ubiquitous infrastructure. As emerging technologies spread into more industries beyond banks and government agencies, the risk of another Equifax-scale disaster grows. IT security teams need to ensure that sensitive data is protected wherever it resides.

Breaking Myths about Encryption

Encryption can cover threat scenarios across a broad variety of data types. Of all recorded breaches since 2013, only 4% were 'secure breaches' – those where the stolen data was encrypted and therefore useless to the attackers. Yet businesses tend to bypass encryption in favour of perimeter defences and other newer technologies because of common misconceptions.

Many decision makers regard encryption as a costly solution that only applies to businesses with hardware compliance requirements. Encryption services, however, have grown to offer scalable data solutions. Encryption gives businesses the choice to protect data at one or more of the following levels: application, file, database and virtual machine. Encrypting data at the source, managing the keys and limiting access controls ensures that data is protected on both the cloud provider's and the data owner's end.
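As a simple, hypothetical illustration of the application-level option, the Python sketch below encrypts a sensitive field inside the application before it ever reaches the datastore, so the storage layer (and by extension the cloud provider) only sees ciphertext; the table, column and key handling are placeholders, and in practice the key would come from a key manager rather than being generated in-line.

# Hypothetical application-level (field) encryption: the sensitive column is encrypted
# in the application before being written, so the datastore only ever holds ciphertext.
import sqlite3

from cryptography.fernet import Fernet

field_key = Fernet(Fernet.generate_key())   # placeholder: fetch from a key manager in practice

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (name TEXT, national_id BLOB)")

# Encrypt at the source: only ciphertext reaches the storage layer.
db.execute(
    "INSERT INTO customers VALUES (?, ?)",
    ("Asha", field_key.encrypt(b"ID-1234-5678")),
)

ciphertext = db.execute("SELECT national_id FROM customers").fetchone()[0]
print(ciphertext)                      # what the storage provider actually sees
print(field_key.decrypt(ciphertext))   # readable only inside the application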

Encrypting data is a flexible investment that delivers high levels of security and compliance for the greatest number of businesses. A reliable encryption service can free businesses from worrying about data tampering, unauthorized access, insecure data transfers and compliance issues.

In an age of inevitable data breaches, encryption is a necessary security measure that can render data inaccessible to attackers or useless to illegal vendors.

The Value of ‘Unsharing’ Your Sensitive Data

Today's businesses require data to be shared in more places, where it rests at constant risk of theft or malicious access. Relying on perimeter protection alone is a reactive approach that leaves data unprotected from unknown and advanced threats, such as targeted attacks, new malware or zero-day vulnerabilities.

More organizations are migrating data to the cloud, enabling big data analysis, and granting access to potential intellectual property or personally identifiable information. It is vital for organizations to start ‘unsharing’ sensitive data. But what does it mean to unshare?

Unsharing data means ensuring that high-value, sensitive information – such as intellectual property, personally identifiable information and company financials – remains on lockdown wherever it resides, and that only approved users and processes are able to use it.

This is where encryption comes in. To fully unshare data, organizations need to encrypt everything. Here are three steps on how to unshare and protect sensitive data through encryption:

  1. Locate sensitive data – Organizations need to identify where data resides in cloud and on-premise environments.
  2. Encrypt sensitive data – Security teams need to decide on the granular levels of data encryption to apply.
  3. Manage encryption keys – Security teams also need to manage and store keys for auditing and control.

Despite the common myths surrounding data encryption, remember that applying it gives companies the greatest return by providing both data protection and authorized access. To learn more about the value of unsharing your data and applying an encryption-centred security approach, you can read our ebook titled Unshare and Secure Sensitive Data – Encrypt Everything.

View the original post at Gemalto.com.

Wi-Fi 6 fundamentals: What is 1024-QAM?

October 25th, 2018

IDC sees Wi-Fi 6 (802.11ax) deployment ramping significantly in 2019 and becoming the dominant enterprise Wi-Fi standard by 2021. This is because many organizations still find themselves limited by the previous Wi-Fi 5 (802.11ac) standard. This is particularly the case in high-density venues such as stadiums, convention centres, transportation hubs, and auditoriums. With an expected four-fold capacity increase over its Wi-Fi 5 (802.11ac) predecessor, Wi-Fi 6 (802.11ax) is successfully transitioning Wi-Fi from a ‘best-effort’ endeavour to a deterministic wireless technology that is fast becoming the de-facto medium for internet connectivity.

Wi-Fi 6 (802.11ax) access points (APs) deployed in dense device environments such as those mentioned above support higher service-level agreements (SLAs) for more concurrently connected users and devices – with more diverse usage profiles. This is made possible by a range of technologies that optimize spectral efficiency, increase throughput and reduce power consumption. These include 1024-Quadrature Amplitude Modulation (1024-QAM), Target Wake Time (TWT), Orthogonal Frequency-Division Multiple Access (OFDMA), BSS Coloring and MU-MIMO.

In this article, we’ll be taking a closer look at 1024-QAM and how Wi-Fi 6 (802.11ax) wireless access points can utilize this mechanism to significantly increase throughput.

1024-QAM

Quadrature amplitude modulation (QAM) is a highly developed modulation scheme used in the communications industry to transmit data over radio frequencies. For wireless communications, QAM is a signal in which two carriers (two sinusoidal waves) shifted in phase by 90 degrees (a quarter out of phase) are modulated, and the resultant output consists of both amplitude and phase variations. These variations form the basis for the transmitted binary bits – the atoms of the digital world – that result in the information we see on our devices.


Two sinusoidal waves shifted by 90 degrees

By varying these sinusoidal waves through phase and amplitude, radio engineers can construct signals that transmit an ever-higher number of bits per hertz (information per signal). Systems designed to maximize spectral efficiency care a great deal about bits-per-hertz efficiency and thus are always employing techniques to construct ever denser QAM constellations to increase data rates. Put simply, higher QAM levels increase throughput capabilities in wireless devices. By varying the amplitude of the signal as well as the phase, Wi-Fi radios are able to construct the following constellation diagram, which shows the values associated with the different states of a 16-QAM signal.

16-QAM constellation example
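For readers who like to see the arithmetic, the short Python sketch below builds the 16 points of a 16-QAM constellation from four in-phase and four quadrature amplitude levels and confirms that each symbol carries log2(16) = 4 bits; the same construction, scaled up, yields the 256-QAM and 1024-QAM grids discussed below. The amplitude levels are the usual normalised values and serve purely as an illustration.

# Build a 16-QAM constellation: 4 in-phase (I) levels x 4 quadrature (Q) levels = 16
# points, so each symbol encodes log2(16) = 4 bits. Larger grids work the same way.
from itertools import product
from math import log2

levels = (-3, -1, 1, 3)                       # normalised I and Q amplitude levels
constellation = list(product(levels, levels)) # every (I, Q) combination is one symbol

print(len(constellation))                     # 16 points
print(int(log2(len(constellation))))          # 4 bits per symbol
print(constellation[:4])                      # first few (I, Q) points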

While the older Wi-Fi 5 (802.11ac) standard is limited to 256-QAM, the new Wi-Fi 6 (802.11ax) standard incorporates an extremely high optional modulation scheme (1024-QAM), with each symbol (a point on the constellation diagram) encoding a larger number of data bits when using a dense constellation. In real-world terms, 1024-QAM enables a 25% data rate increase (throughput) in Wi-Fi 6 (802.11ax) access points and devices. With over 30 billion connected “things” expected by 2020, higher wireless throughput facilitated by 1024-QAM is critical to ensuring quality-of-service (QoS) in high-density locations such as stadiums, convention centres, transportation hubs, and auditoriums. Indeed, applications such as 4K video streaming (which is becoming the norm) are expected to drive internet traffic to 278,108 petabytes per month by 2021.
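The 25% figure follows directly from the number of bits each symbol can carry, as this small calculation shows (the QAM orders come from the standards themselves; everything else here is simple arithmetic).

# Why 1024-QAM buys roughly 25% more throughput than 256-QAM, all else being equal:
# a symbol drawn from an M-point constellation carries log2(M) bits.
from math import log2

bits_256qam = log2(256)     # 8 bits per symbol (the Wi-Fi 5 maximum)
bits_1024qam = log2(1024)   # 10 bits per symbol (the optional Wi-Fi 6 maximum)

print(bits_1024qam / bits_256qam)   # 1.25 -> a 25% increase in bits per symbol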

Ensuring fast and reliable Wi-Fi coverage in high-density deployment scenarios with older Wi-Fi 5 (802.11ac) APs is increasingly difficult as streaming 4K video and AR/VR content becomes the norm. This is precisely why the new Wi-Fi 6 (802.11ax) standard offers up to a four-fold capacity increase over its Wi-Fi 5 (802.11ac) predecessor. With Wi-Fi 6 (802.11ax), multiple APs deployed in dense device environments can collectively deliver required quality-of-service to more clients with more diverse usage profiles.

This is made possible by a range of technologies – such as 1024-QAM – which enables a 25% data rate increase (throughput) in Wi-Fi 6 (802.11ax) access points and devices. From our perspective, Wi-Fi 6 (802.11ax) is playing a critical role in helping Wi-Fi evolve into a collision-free, deterministic wireless technology that dramatically increases aggregate network throughput to address high-density venues and beyond. Last, but certainly not least, Wi-Fi 6 (802.11ax) access points are also expected to help extend the Wi-Fi deployment cycle by providing tangible benefits for legacy wireless devices.

View the original post by Dennis Huang at The Ruckus Room.

However, as more encryption keys are created for an increasing number of cloud environments like Microsoft Azure, Amazon Web Services (AWS), Salesforce, etc., efficiently managing the encryption keys of individual cloud applications and securing the access, becomes very important. Which is why many organisations use External Key Management (EKM) solutions to cohesively manage all their encryption keys in a secure manner that is bereft of any unauthorised access.

Take the example of Office 365, Microsoft’s on-demand cloud application that is widely used by organisations across the globe to support employee mobility by facilitating anytime, anywhere access to Microsoft’s email application – MS Outlook and business utility applications like MS Word, Excel, PowerPoint, etc.

Gemalto’s BYOK solutions (SafeNet ProtectApp and SafeNet KeySecure) for Office 365 not only ensure that organisations have complete control over their encrypted cloud data but also seamlessly facilitate efficient management of the encryption keys of other cloud applications like Azure, AWS, Google Cloud and Salesforce.

Below is a quick snapshot of how SafeNet ProtectApp and SafeNet KeySecure seamlessly work with Azure BYOK:

To elaborate, below is the step-by-step process of how this works:

  1. SafeNet ProtectApp and KeySecure are used to generate a RSA Key Pair or required Key size using the FIPS 140-2 certified RNG of KeySecure.
  2. A Self-SignedCertificateUtility.jar (which is a Java-based application) then interacts with KeySecure using a TLS-protected NAE service to fetch the Key Pair and create a Self-signed Certificate.
  3. The Key Pair and Self-signed Certificate are stored securely in a PFX or P12 container that encrypts the contents using a Password-based Encryption (PBE) Key.
  4. The PFX file (which is an encrypted container using a PBE Key) is then uploaded on Azure Key Vault using Azure Web API / Rest.
  5. The transmission of the PFX file to the Azure Key Vault is protected using security mechanisms implemented by Azure on their Web API (TLS / SSL, etc.).
  6. Since the PFX files will be located on the same system on which the SelfSignedCertificateUtility.jar utility will be executed, industry-best security practices like ensuring pre-boot approval, enabling two-factor authentication (2FA), etc. should be followed.
  7. Once the Keys are loaded on Azure Key Vault, all encryption operations happen on Azure platform itself.

Continue to find out what to consider when choosing a Key Management solution, as well as how Gemalto can support organisations to make their BYOK journey easier.

To Sum It Up

As technology evolves, so do cybercriminals, and merely encrypting data no longer guarantees foolproof data protection today. While encrypting their sensitive cloud data, organisations must bear in mind that securing and managing the encryption keys is as important as the encryption itself.

To prevent unauthorized access and ensure that the encryption keys don’t fall in the wrong hands, cybersecurity experts unanimously recommend the use of Hardware Security Module (HSM) devices to securely store the encryption keys.

Since encryption keys pass through multiple phases during their lifetime – like generation, storage, distribution, backup, rotation and destruction, efficiently managing these keys at each and every stage of their lifecycle becomes important. A secure and centralized key management solution is critical.

Gemalto’s SafeNet KeySecure offers organisations a robust centralized platform that seamlessly manages all encryption keys. Below are some key benefits that make SafeNet KeySecure a preferred choice for organisations across the globe:

  1. Heterogeneous key management – helps in seamlessly managing multiple encryption keys at each stage of their lifecycle.
  2. Logging and auditing – helps in storing audit trails that can be analyzed by using any leading SIEM tools.
  3. Centralized management console – helps in assigning administrator roles according to the scope of their responsibilities.
  4. High Interoperability – supports a broad ecosystem of respected technology partners using the OASIS KMIP standard
  5. Reduces the overall cost of data security by offering automated operations.

Learn more about how Gemalto’s suite of cloud security solutions can help your organisation fully secure your data in the cloud.

View the original article at Gemalto.com.

Take a number, we’ll be right with you: Wi-Fi connections and capacity

November 7th, 2018

Wi-Fi connects the world, one device at a time. Literally. One. Device. At. A. Time. Wi-Fi is a half-duplex technology. This means only one device gets to transmit. All other devices sharing that channel have to wait their turn to make wi-fi connections. Yet we talk about high capacity and how many devices an AP can support. What does that mean if the answer is always one?

When more than one device is connected to an AP, they must share the air. All other things being equal, the devices and the AP (it counts as a device too!) will take turns transmitting. You could easily have 10, 50, 100, or more devices connected to an AP. But each still has to wait for its turn to talk.

If you want to sound like a Wi-Fi pro, you’ll need to understand a few things about capacity: how many Wi-Fi connections an AP can keep track of, how devices are trying to talk simultaneously, and how fast each can talk.

You might have 100 devices connected to an AP, but if only 10 need to transmit at a given time, you don’t have to wait long for your turn. The other 90 devices stay connected and hang out until they have something to say.

Now, imagine you’ve got 500 devices connected and 250 want to talk simultaneously. That’s like being stuck in line at the restroom during a concert and there are 249 people ahead of you. Yikes.

If all of the devices are fast, your turn will come much more quickly: think of your 802.11ac smartphone versus Grandma’s old 802.11g laptop. No matter what you do, the phone will be capable of going faster than the laptop. But that doesn’t mean they will get the same performance on all APs.

Ruckus helps you wring every last bit of speed out of any device with innovations like BeamFlex+, transient client management, auto RF cell sizing, airtime decongestion, and much more. When you’ve got a network with lots of Wi-Fi devices (why, hello, IoT), any extra performance boosts can make a big difference.

Read the original report at The Ruckus Room.

Roam If You Want To: Moving Wi-Fi Devices Around the World

October 30th, 2018

Wi-Fi means mobility. Devices that can move around and be free of all that pesky cabling. Roaming, the ability for a device to move from one AP to another, is critical. There is nothing more frustrating than sitting underneath an AP—you can see it, the AP is right there—but you aren’t getting a strong signal. What to do?

First, let’s address a common misconception: who tells a Wi-Fi device to roam? Most people will say the AP. In fact, the device decides and there is very little the AP—or any other device—can do about it.

Usually, the client stays connected to its AP until the signal strength drops and becomes too low. How low? Well, that’s up to the device. And all devices are a little bit different. A few allow you to adjust the roaming threshold setting, but most do not. A client device that should roam but doesn’t is known as a sticky client.

But you’re still here, standing under an AP with a device reporting a low signal. How do you get that stubborn thing to roam?

Fortunately, there are some tricks available. The first is a standards-based method that tells the device about the network and helps it make a better decision. The IEEE 802.11k standard allows a device to ask an AP about the network, including how well the AP can hear it, and vice versa. It can even get a list of neighbouring APs instead of having to scan for itself. Kind of like using a dating service versus going to a bar to meet someone new! More importantly, it gives the device a much better idea of whether it’s time to move on or stick with its current AP.

Another standard, 802.11v, allows an AP to request–politely—that the device move and even give a list of suggested APs the device could move to. Sounds great!

The downside to both of these is that the AP and the device each need to support the standard. Some do, but not all.

We mentioned that roaming—the decision by a device to disconnect from an AP and connect to a new one—is a device decision. But there is a way that an AP can “force” a device to move: it can send a de-authentication message that disconnects the device. Of course, the device will automatically try to reconnect. As part of the reconnection process, it scans its surroundings and—Egad! I’m right under an AP!—and connect to the closer AP.

All Ruckus APs support this. As a matter of fact, we combine this concept of forcing a device to move along with some other intelligence around exactly when to use it, plus standards like 802.11k and 802.11r. We call it SmartRoam+. You can call it helping Wi-Fi devices roam more quickly and seamlessly.

There’s a lot more to device roaming and we’ll save that for another post. But in the meantime, you can use this to get your “stuck” devices moving again.

View the original post at The Ruckus Room.

Data Encryption: From “Good-to-Have” To “Must-Have”

October 30th, 2018

Whenever the news of any data breach surfaces, the first action of most organisations is to take an immediate stock of their IT perimeter defences and update them to avoid getting breached themselves.

While it is definitely a good strategy to ensure that perimeter defence systems like firewalls, antivirus, antimalware, etc. that act as the first line of defence is always kept updated, focusing only on these defence mechanisms is no longer sufficient in today’s perilous times where hackers are breaching organisations’ cybersecurity more frequently than ever before.

As per the H1 results of Gemalto’s 2018 Breach Level Index, more than 3.3 billion data files were breached across the globe in the first six months of 2018 alone. This figure marks an increase of a whopping 72% over those recorded for H1 2017! And unsurprisingly, more than 96% of these breaches occurred on data that was not encrypted.

The latest victim of data theft in India is Pune-based digital lending startup EarlySalary, who suffered a massive data breach in which the personal details, employment status and mobile numbers of its 20,000 potential customers were stolen. The company discovered the breach only after they received a ransom demand from the hackers, following which they plugged the vulnerability. While the company claimed that the attack was centred on one of its older landing pages, the damage was already done.

With rising cyber attacks such as these, organisations can no longer live under the illusion that once they deploy robust perimeter defence systems, they are safe. Whether it is an attack on startups like EarlySalary that may have rudimentary perimeter defences or conglomerates like Facebook, SingHealth and Equifax that most likely had deployed top-notch front-line defence systems, the common denominator between the data breaches at all these organisations is that they focused only on their front line defences (perimeter security) while ignoring their last line of defence – data encryption.

Secure the Data, Not Just the Systems

While perimeter security mechanisms indeed act as a strong deterrent against cyber attacks, they are rendered completely useless once hackers gain an inside access to an organisation’s data files.

Whether the data is at rest, or in motion (during transfer), encrypting it is perhaps the surest way of safeguarding it against malicious attacks. Since encryption makes it virtually impossible to decipher the data without the corresponding decryption key, hackers have zero incentive in breaching organisations that have encrypted their data.

Below are three steps that organisations need to take to ensure optimal data protection:

1. Locate sensitive data

First, identify where your most sensitive data files reside – audit your storage and file servers, applications, databases and virtual machines, along with the data that’s flowing across your network and between data centers.

2. Encrypt & Tokenize it

When choosing a data encryption solution, make sure that it meets two important objectives – protecting your sensitive data at each stage and tokenizing it.

Gemalto’s SafeNet Data Encryption Solutions not only encrypt data seamlessly at each stage (at rest and in motion) but also incorporate a proprietary Tokenization Manager that automatically generates a random surrogate value (also known as a Token or Reference Key) for each data file to avoid easy identification.

3. Safeguard and manage your crypto keys

To ensure zero-compromise of your data’s encryption keys, it is important that the keys are stored securely and separately from your encrypted data. Use of Hardware Security Modules (HSMs) is perhaps the surest way of ensuring optimal key security.

When choosing a HSM solution, make sure that the solution also facilitates key management to manage the crypto keys at each stage of their lifecycle – like generation, storage, distribution, backup, rotation, and destruction.

Gemalto’s SafeNet HSMs come with an in-built Key Management feature that cohesively provides a single, robust, centralized platform that seamlessly manages the crypto keys at each stage of their lifecycle.

5 Reasons Why Data Encryption Becomes a MUST

With cyber attacks on the rise with every passing day, the cybersecurity landscape across the globe has witnessed a tectonic shift in the last few years. First-line of defence mechanisms like perimeter security are no longer sufficient to prevent data breaches, since after an intrusion, there is hardly anything that can be done to protect the data that is not encrypted.

Realising this, Governments across the globe are introducing stringent regulations like the General Data Protection Regulation (GDPR), RBI’s Data Localisation, PCI-DSS and the upcoming Personal Data Protection Law, 2018 in India to ensure that organisations make adequate security provisions to protect their users’ confidential data.

Below are a few reasons why data encryption is no longer “good-to-have”, but “must-have” in today’s world:

1. Encryption Protects Data At All Times

Whether the data is at rest or in motion (in transit), encryption protects it against cyber attacks and, in the event of a breach, renders the stolen data useless to attackers.

2. Encryption Maintains Data Integrity

Cyber criminals don’t always breach an organisation’s cybersecurity to steal sensitive information. As seen in the Madhya Pradesh e-Tender scam, they often breach organisations to alter sensitive data for monetary gain. Encryption maintains data integrity and immediately flags any alteration to the data.
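As an illustration of how tampering gets flagged, the sketch below uses authenticated encryption (AES-GCM via the open-source `cryptography` package, chosen here purely as an example): flipping a single bit of the ciphertext causes decryption to fail, so the alteration cannot go unnoticed.

```python
# Minimal sketch of tamper detection with authenticated encryption (AES-GCM).
# Any alteration of the ciphertext makes decryption fail, flagging the change.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.exceptions import InvalidTag

key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)
nonce = os.urandom(12)

ciphertext = aesgcm.encrypt(nonce, b"tender bid: 1,000,000", None)

tampered = bytearray(ciphertext)
tampered[-1] ^= 0x01                      # attacker flips a single bit

try:
    aesgcm.decrypt(nonce, bytes(tampered), None)
except InvalidTag:
    print("Integrity check failed: data was altered")
```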

3. Encryption Protects Privacy

Encryption keeps users’ personal data safe while protecting their anonymity and privacy, reducing opportunities for surveillance by governments or cyber criminals. This is one of the primary reasons why Apple strongly believes that encryption will only strengthen our protection against cyber attacks and terrorism.

4. Encryption Protects Data Across Devices

In today’s increasingly Bring Your Own Device (BYOD) world, data transfer between multiple devices and networks opens avenues for cyber attacks and data thefts. Encryption eliminates these possibilities and safeguards data across all devices and networks, even during transit.

5. Encryption Facilitates Regulatory Compliance

To safeguard users’ personal data, organisations across many industries have to comply with stringent data protection regulations and standards such as HIPAA, GDPR, PCI DSS, RBI’s data localisation mandate and FIPS. Encryption provides strong data protection and helps ensure regulatory compliance.

It’s time for a new data security mindset. Learn how Gemalto’s three-step Secure the Breach approach can help your organisation protect its sensitive data from cyber attacks.

For more information contact Net-Ctrl direct through our Contact Page, or call us direct on 01473 281 211.

View the original article by Gemalto.

Multi-gigabit is right here, right now

October 26th, 2018

I recently came across an interesting TechTarget article that discusses when an organization should upgrade to multi-gigabit (mGig) switches to support a new generation of 802.11ax access points (APs). As we’ve previously discussed here on the Ruckus Room, the IEEE 802.11ax (Wi-Fi 6) standard features multiple enhancements that enable access points to offer an expected four-fold capacity increase over their 802.11ac Wave 2 (Wi-Fi 5) predecessors in dense scenarios.

The introduction of 802.11ax (Wi-Fi 6) access points is certainly timely, as many organizations are already pushing the limits of the 802.11ac (Wi-Fi 5) standard, particularly in high-density venues such as stadiums, convention centers, transportation hubs, and auditoriums. Indeed, the proliferation of connected devices, along with 4K video streaming, is placing unprecedented demands on networks across the globe.

To accommodate the demand for increased capacity, some organizations have begun deploying 802.11ax (Wi-Fi 6) access points alongside existing 802.11ac (Wi-Fi 5) access points, with the former expected to become the dominant enterprise Wi-Fi standard by 2021. To take full advantage of the speeds offered by 802.11ax (Wi-Fi 6) APs (up to 5 gigabits per second), organizations have also begun installing multi-gigabit switches to either replace or supplement older infrastructure. This is because system administrators cannot ensure a quality user experience by simply upgrading one part of the network – the access points. Reaping the benefits of 802.11ax (Wi-Fi 6) requires upgrades on the switch side as well.
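A back-of-the-envelope calculation shows why. The figures in the sketch below are illustrative assumptions (a 5 Gbps peak PHY rate, as cited above, and an assumed 50% real-world efficiency), not benchmarks, but they make the bottleneck obvious.

```python
# Illustrative arithmetic: even at a realistic fraction of its 5 Gbps peak PHY
# rate, a Wi-Fi 6 AP can outrun a single 1 GbE uplink, which is why the switch
# port matters as much as the AP itself.
ap_peak_phy_gbps = 5.0          # headline 802.11ax data rate cited above
assumed_efficiency = 0.5        # assumed real-world fraction of the PHY rate
uplink_gbps = 1.0               # legacy 1 GbE switch port

ap_throughput = ap_peak_phy_gbps * assumed_efficiency
print(f"Estimated AP throughput: {ap_throughput:.1f} Gbps")
print("Uplink is the bottleneck" if ap_throughput > uplink_gbps else "Uplink is sufficient")
```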

The transition to multi-gigabit switches

It is important to emphasize that the transition to multi-gigabit switches does not necessarily require a wholesale infrastructure upgrade. It can happen gradually, adding a few switches as needed. Furthermore, most multi-gigabit switches today include a mix of multi-gigabit and gigabit ports. Only the ports connected to 802.11ax (Wi-Fi 6) APs require multi-gigabit speeds, while the remaining gigabit ports are adequate for computers, printers, VoIP phones, cameras and other Ethernet devices.

With the introduction of 802.11ax (Wi-Fi 6) starting now and an approaching avalanche of IoT connections, higher speed in the wired infrastructure is critical to prevent bottlenecks and maintain optimal network performance. I suggest that the transition to multi-gigabit switches should start now. With the average life of a switch being five to seven years – and up to 10 years at many institutions – the need for multi-gigabit connections will almost certainly arrive within this timeframe.

Read the original post by Rick Freedman at The Ruckus Room.

Facing the Facebook Breach: Why Simple SSO is Not Enough

October 26th, 2018

Let’s ‘face’ it. The September 2018 Facebook breach was not only a ‘mega’ breach in terms of the 50 million compromised users affected, but also a severe one due to the popularity of the social media giant. To recap, cyber criminals got hold of users’ Facebook login credentials. The breach was compounded by the fact that many users use their Facebook credentials to log into other sites, which means the hackers gained access not only to a user’s Facebook account, but to every other account that uses those Facebook login credentials.

SSO not a social media fashion – it’s an enterprise must

In essence, the Facebook credentials act as a simple, all-you-can-eat Single Sign-On (SSO) for other social platforms. But the popularity of SSO solutions is not just a Facebook fashion. It addresses a genuine business need: the convenience of organizations that need access to their day-to-day web and cloud-based applications. Simple single sign-on offers clear advantages for enterprises: no need to maintain a separate set of passwords for each and every application; a reduction in IT overload and password reset requests; and increased productivity for employees, contractors and remote workers, who authenticate once and access everything they need, any time and any place.

The demand for SSO in enterprises has grown with the rise in the number of web and cloud-based apps. However, along with wide adoption has come the risk associated with simple SSO. Only a month before the Facebook breach, the potential ‘massive’ security dangers of single sign-on were discussed at the USENIX conference in Baltimore. The paper describes how criminals can gain control of numerous other web services when a single account is hacked.

Google+ access to 3rd party apps now a minus

When it comes to third-party app violations, Google has not been spared. Its “Project Strobe” review revealed stark findings related to third-party API access to Google+ user data. Due to a bug, third-party apps were granted access to profile information that users had not marked as public. As a result, Google recommended sunsetting Google+ for consumers and concentrating R&D efforts on giving enterprises better control over what account data they choose to share with each app. Apps will need to show requested permissions one at a time, each within its own dialog box, as opposed to all requested permissions on a single screen.

Smart SSO with role-based policies

The risks that consumers were exposed to as a result of buffet-style sign-on in the Facebook case also apply to the enterprise. Fortunately, there is a solution: to maintain the convenience of single sign-on without compromising on security, enterprises can use smart single sign-on. With a smart SSO solution such as Gemalto’s SafeNet Trusted Access, enterprises can define conditional access policies. These policies can restrict or relax access to various applications, depending on the risk. For example, groups of users would be able to authenticate only once when working in the office, but have to re-enter their password or another form of 2FA (e.g. SMS, pattern-based code, hardware token) for more restricted access.
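Conceptually, a conditional access policy is just a set of rules evaluated against the login context. The Python sketch below is a simplified, hypothetical illustration of that idea – not SafeNet Trusted Access’s actual policy engine – and the group names and rules are made up.

```python
# Minimal sketch of a conditional (scenario-based) access policy behind smart SSO.
# Group names and rules are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class AccessContext:
    user_group: str        # e.g. "finance", "contractors"
    on_trusted_network: bool
    app_sensitivity: str   # "low" or "high"

def required_authentication(ctx: AccessContext) -> str:
    """Decide whether the existing SSO session is enough or step-up 2FA is needed."""
    if ctx.app_sensitivity == "high":
        return "step-up 2FA"               # sensitive apps always require a second factor
    if not ctx.on_trusted_network:
        return "step-up 2FA"               # off-site access is treated as higher risk
    if ctx.user_group == "contractors":
        return "password re-entry"
    return "SSO session"                   # low-risk scenario: single sign-on is enough

print(required_authentication(AccessContext("finance", True, "high")))    # step-up 2FA
print(required_authentication(AccessContext("employees", True, "low")))   # SSO session
```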

To increase trust without sacrificing the convenience of SSO for most apps and scenarios, stepping up authentication after the SSO login is an advantage. Enterprises can choose their access controls for specific user groups, sensitive apps and contextual conditions by applying scenario-based access policies.

Trusted Identity as a Service Provider

Using access management, enterprises can federate dozens of cloud applications without unnecessary burden on IT teams, while keeping the necessary protections in place.

With smart SSO, the proliferation of cloud apps need not lead to a feast of security breach reports. To learn more about the smart face of single sign-on, and to prevent an iDaaS-ter (Identity as a Service disaster), download the fact sheet, Matching Risk Policies to User Needs with Access Management, read more about Identity as a Service or watch how Gemalto SafeNet single sign-on solutions work in the cloud.

View the original post at Gemalto.com.

Gemalto’s vision for next generation digital security

October 25th, 2018

Digital transformation is a term that we’ve all heard a lot over the last 10 years, often in the context of a specific industry or process. But it’s undeniable now that the entire world is going through a digital transformation that is touching every aspect of our lives – how we live, how we work and how we discover the wider world around us.

An increasingly digital world means an ever-increasing number of pieces of data being exchanged every time we use an online service or a connected device. There are already billions of these exchanges taking place every day, and it’s estimated that by 2025, there will be 50 times more individual digital interactions than there were in 2010. This data defines our online lives, so being able to trust it is critical. With expectations from enterprises and consumers growing, both in the amount of information we share and how it’s protected, the challenge is a significant one.

Traditional security is no longer enough

Breaches are growing every year, across all sectors, with British Airways and Air Canada among the most recent high profile victims. Our Breach Level Index has tracked the number of data records lost or stolen since 2013, and with an estimated 5 million more being added every day, the total should easily hit a staggering 10 billion before the end of this year.

Technology firms have borne the brunt of these breaches, but everyone is a target, from entertainment to healthcare and even education. In the majority of cases, the main cause of the attacks is identity theft. And once attackers are inside the network, the real damage comes from unencrypted data – shockingly, 96% of breaches involved unencrypted data that the hacker could easily profit from (particularly in the case of credit card details).

The ever-growing list of high-profile breaches shows that traditional security solutions are reaching their limits. Faced with a worldwide digital transformation that shows no sign of slowing down, we need to deploy a new generation of digital security solutions. This next-generation security must help organizations verify users’ identities in a purely online context. It must also remove the need for people to remember hundreds of (weak) passwords and shouldn’t add intrusive security steps (which is why I see a growing role for biometrics and risk-based authentication). Finally, it needs to ensure that individuals’ digital privacy is respected and their data isn’t monetized – unless they’ve given their express permission. If not, people will leave the service and regulators will come down on offenders with heavy fines.

The portfolio of security services that we have built up over the last decade has put us in a unique position to help Service Providers and Governments answer these challenges by providing trusted digital identities to combat ID theft, and protection for previously unencrypted data.

Next generation digital security

Our strategic goal is to help our customers protect their entire digital service cycle from sign-up to opt-out. This starts when a new user has to prove their identity to register for a service, after which they are issued a strong digital ID in the form of a smartcard, a digital token or biometric data. When they log in, we can authenticate them using multiple factors and modes – from risk-based analysis to facial recognition. When they use the service, we can encrypt all data using key management techniques and hardware security modules. And when they leave, cryptographic account deletion means their data is unrecoverable.
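Cryptographic account deletion (sometimes called crypto-shredding) is worth a closer look. The minimal sketch below, using the open-source `cryptography` package as a stand-in for a real key manager, shows why destroying a per-account key is enough to make the remaining ciphertext unrecoverable.

```python
# Minimal sketch of cryptographic account deletion ("crypto-shredding"):
# each account's data is encrypted under its own key, so destroying that key
# renders the stored data unrecoverable without touching the data itself.
from cryptography.fernet import Fernet, InvalidToken

account_keys = {"alice": Fernet.generate_key()}   # per-account data-encryption keys
stored_data = {
    "alice": Fernet(account_keys["alice"]).encrypt(b"alice's profile and history")
}

# Account closure: delete only the key.
del account_keys["alice"]

# The ciphertext still exists, but without the original key it cannot be decrypted.
try:
    Fernet(Fernet.generate_key()).decrypt(stored_data["alice"])
except InvalidToken:
    print("Data is unrecoverable once the key is destroyed")
```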

We believe that there are four key pillars to next-generation digital security:

  • Open. We don’t believe in building walls around digital assets. To add value, people and data must be able to flow in an open, decentralized, federated yet secure way.
  • Proven. It’s not enough to just say you’re an expert in security – you have to prove it, time and time again. Companies need measurable fraud reduction and liability management, and our long-term blue-chip customers are the best evidence of our capability here.
  • Hybrid. Security tools must be designed to work in the real world. That means data security must be flexible enough to deal with a mix of hybrid, on-premise and cloud IT environments.
  • Convenient. If security stops people from doing what they need to do, it’s failed. We’re providing smooth user experiences by leveraging technology like biometrics to help make authentication frictionless and invisible.

We’re proud to play our part in protecting the world’s data, and enabling organizations across the globe to have successful digital transformations. As you may have seen from the announcement by Thales of the proposed acquisition of Gemalto, they have the same view of the growing needs for digital security as we do. The plan is to keep Gemalto’s scope intact and coherent within a new global business unit at Thales; our activities would be combined with Thales assets and expertise in cybersecurity, data analytics and artificial intelligence, which would only increase our ability to fulfil this mission.

Interested in reading more on our vision for next-generation security?

This article originally appeared on Philippe Vallée’s LinkedIn profile.

Encryption and the Fight Against Threats from Emerging Data-Driven Technologies

October 25th, 2018

It has been a year since the massive breach at credit reporting giant Equifax, which exposed 143 million U.S. consumers to identity theft and other losses. Today, even more businesses are exposed to rapidly changing technologies that are hungry to produce, share and distribute data. This blog explores the dangers of leaving high-value, sensitive information unprotected. It also outlines a three-step approach to inevitable data breaches, with encryption at its core.

After Equifax, Do Emerging Technologies Bring New Dilemmas?

Few things are more disappointing than high-impact disasters that could have been averted. When credit reporting giant Equifax announced that it had been breached in May 2017, the personally identifiable information (PII) of 143 million U.S. consumers had been stolen. Further investigation revealed that Equifax not only failed to apply critical vulnerability patches and perform regular security reviews, but also stored sensitive information in plaintext, without encryption.

The Equifax breach, one of the worst data breaches in history, was preventable. The attack stemmed from a critical vulnerability in Apache Struts for which a patch had been available since March 2017, two months before the breach. There are multiple ways to defend against inevitable breaches, including those that exploit zero-day vulnerabilities, and one of the strongest is to encrypt high-value, sensitive data at rest.

Every day, approximately 7 million records are lost or stolen because of data breaches. The majority of the data in these breaches was unsecured or unencrypted. A global study on the state of payment data security revealed that only 43% of companies use encryption or tokenization at the point of sale.

Today’s IT security experts face new challenges. Small businesses and organizations as large as Equifax have started to adopt technology trends in the fields of democratized artificial intelligence (AI), digitalized ecosystems, do-it-yourself biohacking, transparently immersive experiences and ubiquitous infrastructure. As emerging technologies spread into more industries beyond banks and government agencies, the risk of another Equifax-scale disaster grows. IT security teams need to ensure that sensitive data is protected wherever it resides.

Breaking Myths about Encryption

Encryption can cover threat scenarios across a broad variety of data types. Of all recorded breaches since 2013, only 4% were ‘secure breaches’ – breaches where encryption was in use. Yet businesses tend to bypass encryption in favour of perimeter defences and other newer technologies because of common misconceptions.

Many decision makers regard encryption as a costly solution that only applies to businesses with hardware compliance requirements. Encryption services, however, have grown to offer scalable data protection. Encryption gives businesses the choice to encrypt data at one or more of the following levels: application, file, database and virtual machine. Encrypting data at the source, managing keys and limiting access controls ensures that data is protected on both the cloud provider’s and the data owner’s end.

Encrypting data is a flexible investment that ensures high levels of security and compliance for a wide range of businesses. A reliable encryption service can free businesses from worrying about data tampering, unauthorized access, insecure data transfers and compliance issues.

In an age of inevitable data breaches, encryption is a necessary security measure that can render data inaccessible to attackers or useless to illegal vendors.

The Value of ‘Unsharing’ Your Sensitive Data

Today’s businesses require data to be shared in more places, where it rests at constant risk of theft or malicious access. Relying on perimeter protection alone is a reactive approach that leaves data unprotected against unknown and advanced threats, such as targeted attacks, new malware or zero-day vulnerabilities.

More organizations are migrating data to the cloud, enabling big data analysis, and granting access to potential intellectual property or personally identifiable information. It is vital for organizations to start ‘unsharing’ sensitive data. But what does it mean to unshare?

Unsharing data means ensuring that high-value, sensitive information – such as intellectual property, personally identifiable information and company financials – remains on lockdown wherever it resides. It means that only approved users and processes are able to use the data.

This is where encryption comes in. To fully unshare data, organizations need to encrypt everything. Here are three steps on how to unshare and protect sensitive data through encryption:

  1. Locate sensitive data – Organizations need to identify where data resides in cloud and on-premise environments.
  2. Encrypt sensitive data – Security teams need to decide on the granular levels of data encryption to apply.
  3. Manage encryption keys – Security teams also need to manage and store keys for auditing and control.

Despite the common myths surrounding data encryption, remember that applying it gives companies strong returns by providing both data protection and controlled, authorized access. To learn more about the value of unsharing your data and applying an encryption-centred security approach, you can read our ebook, Unshare and Secure Sensitive Data – Encrypt Everything.
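Putting steps 2 and 3 together, a common pattern is envelope encryption: data is encrypted with a data-encryption key (DEK), and the DEK is itself encrypted with a key-encryption key (KEK) that is stored and managed separately (in practice, inside an HSM or key manager). The sketch below is a simplified illustration using the open-source `cryptography` package, not a depiction of any specific product.

```python
# Minimal sketch of the encrypt-and-manage-keys steps using envelope encryption:
# data is encrypted with a data-encryption key (DEK), and the DEK itself is
# encrypted with a key-encryption key (KEK) managed separately from the data.
from cryptography.fernet import Fernet

kek = Fernet.generate_key()                 # key-encryption key, held by the key manager
dek = Fernet.generate_key()                 # data-encryption key, used by the application

ciphertext = Fernet(dek).encrypt(b"intellectual property")   # step 2: encrypt the data
wrapped_dek = Fernet(kek).encrypt(dek)                       # step 3: protect the key

# Only the ciphertext and the wrapped DEK are stored with the data;
# recovering the plaintext requires the key manager to unwrap the DEK first.
recovered_dek = Fernet(kek).decrypt(wrapped_dek)
print(Fernet(recovered_dek).decrypt(ciphertext))             # b'intellectual property'
```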

View the original post at Gemalto.com.

Wi-Fi 6 fundamentals: What is 1024-QAM?

October 25th, 2018

IDC sees Wi-Fi 6 (802.11ax) deployment ramping significantly in 2019 and becoming the dominant enterprise Wi-Fi standard by 2021. This is because many organizations still find themselves limited by the previous Wi-Fi 5 (802.11ac) standard. This is particularly the case in high-density venues such as stadiums, convention centres, transportation hubs, and auditoriums. With an expected four-fold capacity increase over its Wi-Fi 5 (802.11ac) predecessor, Wi-Fi 6 (802.11ax) is successfully transitioning Wi-Fi from a ‘best-effort’ endeavour to a deterministic wireless technology that is fast becoming the de-facto medium for internet connectivity.

Wi-Fi 6 (802.11ax) access points (APs) deployed in dense device environments such as those mentioned above support higher service-level agreements (SLAs) for more concurrently connected users and devices – with more diverse usage profiles. This is made possible by a range of technologies that optimize spectral efficiency, increase throughput and reduce power consumption. These include 1024-state Quadrature Amplitude Modulation (1024-QAM), Target Wake Time (TWT), Orthogonal Frequency-Division Multiple Access (OFDMA), BSS Coloring and MU-MIMO.

In this article, we’ll be taking a closer look at 1024-QAM and how Wi-Fi 6 (802.11ax) wireless access points can utilize this mechanism to significantly increase throughput.

1024-QAM

Quadrature amplitude modulation (QAM) is a mature modulation scheme used throughout the communications industry to transmit data over radio frequencies. In wireless communications, a QAM signal is built from two carriers (two sinusoidal waves) shifted in phase by 90 degrees (a quarter cycle out of phase); both are modulated, and the resulting output varies in both amplitude and phase. These variations encode the transmitted binary bits – the atoms of the digital world – that become the information we see on our devices.


Two sinusoidal waves shifted by 90 degrees
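As a rough numerical illustration of this idea (using NumPy, with an arbitrary carrier frequency chosen for the example), combining an I value on the cosine carrier with a Q value on the sine carrier yields a single wave whose amplitude and phase together encode the symbol.

```python
# Minimal sketch: a QAM symbol modulates two carriers that are 90 degrees out of
# phase, so the combined signal varies in both amplitude and phase.
import numpy as np

carrier_hz = 5.0                                           # arbitrary example carrier
t = np.linspace(0, 1, 1000, endpoint=False)

i_level, q_level = 3, -1                                   # one 16-QAM symbol (I, Q)
signal = (i_level * np.cos(2 * np.pi * carrier_hz * t)
          - q_level * np.sin(2 * np.pi * carrier_hz * t))  # I on cosine, Q on sine

amplitude = np.hypot(i_level, q_level)                     # resulting carrier amplitude
phase_deg = np.degrees(np.arctan2(q_level, i_level))       # resulting carrier phase
print(f"amplitude = {amplitude:.2f}, phase = {phase_deg:.1f} degrees")
```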

By varying these sinusoidal waves in phase and amplitude, radio engineers can construct signals that transmit an ever-higher number of bits per hertz (information per signal). Systems designed to maximize spectral efficiency care a great deal about bits-per-hertz efficiency and thus constantly employ techniques to construct ever denser QAM constellations to increase data rates. Put simply, higher QAM levels increase throughput capabilities in wireless devices. By varying the amplitude of the signal as well as the phase, Wi-Fi radios can construct the following constellation diagram, which shows the values associated with the different states of a 16-QAM signal.

16-QAM constellation example
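For illustration, the short Python sketch below builds the same square 16-QAM grid programmatically; each of the 16 points carries log2(16) = 4 bits.

```python
# Minimal sketch: build a square M-QAM constellation and count bits per symbol.
# Each point pairs one amplitude level on the in-phase (I) carrier with one on
# the quadrature (Q) carrier.
import math

def qam_constellation(m: int):
    """Return the I/Q points of a square M-QAM constellation (M a power of 4)."""
    side = int(math.isqrt(m))
    levels = [2 * k - (side - 1) for k in range(side)]   # [-3, -1, 1, 3] for 16-QAM
    return [(i, q) for i in levels for q in levels]

points = qam_constellation(16)
print(len(points), "points,", int(math.log2(16)), "bits per symbol")   # 16 points, 4 bits
```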

While the older Wi-Fi 5 (802.11ac) standard is limited to 256-QAM, the new Wi-Fi 6 (802.11ax) standard incorporates an optional, much denser modulation scheme (1024-QAM), with each symbol (a point on the constellation diagram) encoding a larger number of data bits. In real-world terms, 1024-QAM enables a 25% data rate (throughput) increase in Wi-Fi 6 (802.11ax) access points and devices. With over 30 billion connected “things” expected by 2020, the higher wireless throughput facilitated by 1024-QAM is critical to ensuring quality of service (QoS) in high-density locations such as stadiums, convention centres, transportation hubs and auditoriums. Indeed, applications such as 4K video streaming (which is becoming the norm) are expected to drive internet traffic to 278,108 petabytes per month by 2021.
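The 25% figure follows directly from the number of bits carried per symbol – 10 for 1024-QAM versus 8 for 256-QAM – with other factors such as coding rate held equal:

```python
# 1024-QAM carries log2(1024) = 10 bits per symbol versus log2(256) = 8 bits
# for 256-QAM, a 10/8 = 1.25x increase, all other factors being equal.
import math

bits_256 = math.log2(256)     # 8 bits per symbol (Wi-Fi 5 maximum)
bits_1024 = math.log2(1024)   # 10 bits per symbol (Wi-Fi 6 optional 1024-QAM)
print(f"Data rate increase: {bits_1024 / bits_256 - 1:.0%}")   # 25%
```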

Ensuring fast and reliable Wi-Fi coverage in high-density deployment scenarios with older Wi-Fi 5 (802.11ac) APs is increasingly difficult as streaming 4K video and AR/VR content becomes the norm. This is precisely why the new Wi-Fi 6 (802.11ax) standard offers up to a four-fold capacity increase over its Wi-Fi 5 (802.11ac) predecessor. With Wi-Fi 6 (802.11ax), multiple APs deployed in dense device environments can collectively deliver required quality-of-service to more clients with more diverse usage profiles.

This is made possible by a range of technologies – such as 1024-QAM – which enables a 25% data rate increase (throughput) in Wi-Fi 6 (802.11ax) access points and devices. From our perspective, Wi-Fi 6 (802.11ax) is playing a critical role in helping Wi-Fi evolve into a collision-free, deterministic wireless technology that dramatically increases aggregate network throughput to address high-density venues and beyond. Last, but certainly not least, Wi-Fi 6 (802.11ax) access points are also expected to help extend the Wi-Fi deployment cycle by providing tangible benefits for legacy wireless devices.

View the original post by Dennis Huang at The Ruckus Room.