Archive for October, 2018

Roam If You Want To: Moving Wi-Fi Devices Around the World

Tuesday, October 30th, 2018

Wi-Fi means mobility. Devices that can move around and be free of all that pesky cabling. Roaming, the ability for a device to move from one AP to another, is critical. There is nothing more frustrating than sitting underneath an AP—you can see it, the AP is right there—but you aren’t getting a strong signal. What to do?

First, let’s address a common misconception: who tells a Wi-Fi device to roam? Most people will say the AP. In fact, the device decides and there is very little the AP—or any other device—can do about it.

Usually, the client stays connected to its AP until the signal strength drops too low. How low? Well, that’s up to the device, and all devices are a little bit different. A few allow you to adjust the roaming threshold setting, but most do not. A client device that should roam but doesn’t is known as a sticky client.
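As a rough illustration, a client's roaming logic can be thought of as a threshold check with a little hysteresis. The numbers below are invented for the example; every device vendor tunes its own (usually hidden) values:

```python
# Illustrative sketch of a client-side roaming decision. RSSI values and
# thresholds here are made-up examples, not any vendor's actual algorithm.

ROAM_THRESHOLD_DBM = -75   # hypothetical "signal too low" trigger
HYSTERESIS_DB = 5          # a new AP must be this much stronger to bother moving

def should_roam(current_rssi_dbm, best_candidate_rssi_dbm):
    """Decide whether to leave the current AP for the best candidate AP."""
    if current_rssi_dbm >= ROAM_THRESHOLD_DBM:
        return False  # signal still acceptable: stay put (the "sticky" case)
    # Only move if the candidate is meaningfully better than where we are.
    return best_candidate_rssi_dbm >= current_rssi_dbm + HYSTERESIS_DB

print(should_roam(-60, -50))  # strong signal: stays connected
print(should_roam(-80, -55))  # weak signal, much better AP nearby: roams
print(should_roam(-80, -78))  # weak signal but no better option: stays
```

The hysteresis term is what separates a well-behaved client from one that flaps between two equally mediocre APs.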

But you’re still here, standing under an AP with a device reporting a low signal. How do you get that stubborn thing to roam?

Fortunately, there are some tricks available. The first is a standards-based method that tells the device about the network and helps it make a better decision. The IEEE 802.11k standard allows a device to ask an AP about the network, including how well the AP can hear it, and vice versa. It can even get a list of neighbouring APs instead of having to scan for itself. Kind of like using a dating service versus going to a bar to meet someone new! More importantly, it gives the device a much better idea of whether it’s time to move on or stick with its current AP.

Another standard, 802.11v, allows an AP to politely request that the device move, and even supply a list of suggested APs the device could move to. Sounds great!

The downside to both of these is that the AP and the device each need to support the standard. Some do, but not all.

We mentioned that roaming—the decision by a device to disconnect from an AP and connect to a new one—is a device decision. But there is a way that an AP can “force” a device to move: it can send a de-authentication message that disconnects the device. Of course, the device will automatically try to reconnect. As part of the reconnection process, it scans its surroundings (Egad! I’m right under an AP!) and connects to the closer AP.

All Ruckus APs support this. In fact, we combine this concept of forcing a device to move with other intelligence around exactly when to use it, plus standards like 802.11k and 802.11r. We call it SmartRoam+. You can call it helping Wi-Fi devices roam more quickly and seamlessly.

There’s a lot more to device roaming and we’ll save that for another post. But in the meantime, you can use this to get your “stuck” devices moving again.

View the original post at The Ruckus Room.

Data Encryption: From “Good-to-Have” To “Must-Have”

Tuesday, October 30th, 2018

Whenever news of a data breach surfaces, the first action of most organisations is to take immediate stock of their IT perimeter defences and update them to avoid being breached themselves.

While it is definitely a good strategy to ensure that perimeter defence systems like firewalls, antivirus and antimalware, which act as the first line of defence, are always kept updated, focusing only on these defence mechanisms is no longer sufficient. In today’s perilous times, hackers are breaching organisations’ cybersecurity more frequently than ever before.

According to the H1 results of Gemalto’s 2018 Breach Level Index, more than 3.3 billion data files were breached across the globe in the first six months of 2018 alone. This figure marks a whopping 72% increase over the figure recorded for H1 2017! And unsurprisingly, more than 96% of these breaches involved data that was not encrypted.

The latest victim of data theft in India is Pune-based digital lending startup EarlySalary, which suffered a massive data breach in which the personal details, employment status and mobile numbers of 20,000 potential customers were stolen. The company discovered the breach only after it received a ransom demand from the hackers, following which it plugged the vulnerability. While the company claimed that the attack was centred on one of its older landing pages, the damage was already done.

With rising cyber attacks such as these, organisations can no longer live under the illusion that once they deploy robust perimeter defence systems, they are safe. Whether it is an attack on startups like EarlySalary that may have rudimentary perimeter defences or conglomerates like Facebook, SingHealth and Equifax that most likely had deployed top-notch front-line defence systems, the common denominator between the data breaches at all these organisations is that they focused only on their front line defences (perimeter security) while ignoring their last line of defence – data encryption.

Secure the Data, Not Just the Systems

While perimeter security mechanisms indeed act as a strong deterrent against cyber attacks, they are rendered completely useless once hackers gain inside access to an organisation’s data files.

Whether the data is at rest or in motion (during transfer), encrypting it is perhaps the surest way of safeguarding it against malicious attacks. Since encryption makes it virtually impossible to decipher the data without the corresponding decryption key, hackers have zero incentive to breach organisations that have encrypted their data.

Below are three steps that organisations need to take to ensure optimal data protection:

1. Locate sensitive data

First, identify where your most sensitive data files reside – audit your storage and file servers, applications, databases and virtual machines, along with the data that’s flowing across your network and between data centers.
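As a toy illustration of what an automated discovery pass might look like, the sketch below scans text for two made-up PII patterns. Real discovery tools inspect databases, file shares and network traffic with far more robust detection (checksums, context, classifiers):

```python
import re

# Hypothetical patterns for two common kinds of sensitive data. These are
# deliberately naive examples, not production-grade PII detection.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card_number": re.compile(r"\b(?:\d[ -]?){15}\d\b"),  # naive 16-digit match
}

def find_sensitive(text):
    """Return the PII categories detected in a block of text."""
    return sorted(name for name, pat in PII_PATTERNS.items() if pat.search(text))

record = "Customer: jane@example.com, card 4111 1111 1111 1111"
print(find_sensitive(record))        # ['card_number', 'email']
print(find_sensitive("no PII here")) # []
```

A pass like this only answers "where does sensitive data live?"; the follow-on steps below decide what to do about it.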

2. Encrypt & Tokenize it

When choosing a data encryption solution, make sure that it meets two important objectives – protecting your sensitive data at each stage and tokenizing it.

Gemalto’s SafeNet Data Encryption Solutions not only encrypt data seamlessly at each stage (at rest and in motion) but also incorporate a proprietary Tokenization Manager that automatically generates a random surrogate value (also known as a Token or Reference Key) for each data file to avoid easy identification.
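The idea behind tokenization can be sketched in a few lines. This is a generic vault-style illustration, not SafeNet’s actual design; the `TokenVault` class and token format are invented for the example:

```python
import secrets

# Minimal sketch of vault-style tokenization: each sensitive value is replaced
# by a random surrogate, and the real value is only recoverable through the
# (access-controlled) vault mapping.

class TokenVault:
    def __init__(self):
        self._vault = {}  # token -> original value; would live in secure storage

    def tokenize(self, value):
        token = "tok_" + secrets.token_hex(8)  # random surrogate, unrelated to value
        self._vault[token] = value
        return token

    def detokenize(self, token):
        return self._vault[token]  # real systems enforce authorization here

vault = TokenVault()
token = vault.tokenize("4111-1111-1111-1111")
# Downstream systems store and pass around only the token...
assert token != "4111-1111-1111-1111"
# ...while authorized services can still recover the original.
assert vault.detokenize(token) == "4111-1111-1111-1111"
```

Because the surrogate is random rather than derived from the data, a stolen token reveals nothing about the value it stands in for.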

3. Safeguard and manage your crypto keys

To ensure zero-compromise of your data’s encryption keys, it is important that the keys are stored securely and separately from your encrypted data. Use of Hardware Security Modules (HSMs) is perhaps the surest way of ensuring optimal key security.

When choosing an HSM solution, make sure that the solution also facilitates key management, covering the crypto keys at each stage of their lifecycle – generation, storage, distribution, backup, rotation, and destruction.

Gemalto’s SafeNet HSMs come with a built-in key management feature that provides a single, robust, centralized platform for managing crypto keys at each stage of their lifecycle.

5 Reasons Why Data Encryption Becomes a MUST

With cyber attacks rising with every passing day, the cybersecurity landscape across the globe has witnessed a tectonic shift in the last few years. First-line defence mechanisms like perimeter security are no longer sufficient to prevent data breaches: after an intrusion, there is hardly anything that can be done to protect data that is not encrypted.

Realising this, Governments across the globe are introducing stringent regulations like the General Data Protection Regulation (GDPR), RBI’s Data Localisation, PCI-DSS and the upcoming Personal Data Protection Law, 2018 in India to ensure that organisations make adequate security provisions to protect their users’ confidential data.

Below are a few reasons why data encryption is no longer “good-to-have”, but “must-have” in today’s world:

1. Encryption Protects Data At All Times

Whether the data is at rest or in motion (transit), encryption protects it against all cyber attacks, and in the event of one, renders it useless to attackers.

2. Encryption Maintains Data Integrity

Cyber criminals don’t always breach an organisation’s cybersecurity to steal sensitive information. As seen in the case of the Madhya Pradesh e-Tender Scam, many times they breach organisations to alter sensitive data for monetary gain. Encryption maintains data integrity at all times and immediately red-flags any alteration to the data.
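Strictly speaking, that "red flag" comes from authenticated encryption (e.g. AES-GCM) or a message authentication code applied alongside encryption. A minimal sketch of tamper detection using Python's standard-library HMAC, with invented data:

```python
import hashlib
import hmac
import secrets

# Sketch of integrity protection with HMAC-SHA256. In practice this tag would
# accompany (or be built into) the encrypted payload, as in AES-GCM.

key = secrets.token_bytes(32)  # secret key, stored with the other crypto keys

def protect(data: bytes) -> bytes:
    """Prepend an integrity tag to the data."""
    return hmac.new(key, data, hashlib.sha256).digest() + data

def verify(blob: bytes) -> bytes:
    """Return the data if untouched; raise if it was altered."""
    tag, data = blob[:32], blob[32:]
    expected = hmac.new(key, data, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("data was altered")  # the 'red flag'
    return data

blob = protect(b"bid amount: 1000000")
print(verify(blob))              # unmodified data passes

tampered = blob[:-1] + b"9"      # attacker edits a single byte
try:
    verify(tampered)
except ValueError as e:
    print("rejected:", e)
```

Note the use of `hmac.compare_digest`, which compares tags in constant time to avoid leaking information through timing.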

3. Encryption Protects Privacy

Encryption keeps users’ private data safe while upholding their anonymity and privacy, which reduces surveillance opportunities for governments and cyber criminals. This is one of the primary reasons why Apple strongly believes that encryption will only strengthen our protection against cyber attacks and terrorism.

4. Encryption Protects Data Across Devices

In today’s increasingly Bring Your Own Device (BYOD) world, data transfer between multiple devices and networks opens avenues for cyber attacks and data thefts. Encryption eliminates these possibilities and safeguards data across all devices and networks, even during transit.

5. Encryption Facilitates Regulatory Compliance

To safeguard users’ personal data, organisations across many industries have to comply with stringent data protection regulations like HIPAA, GDPR, PCI-DSS, RBI Data Localisation, FIPS, etc. that are mandated by local regulators. Encryption assures optimal data protection and helps ensure regulatory compliance.

It’s time for a new data security mindset. Learn how Gemalto’s 3-step Secure the Breach approach can help your organisation secure your sensitive data from cyber-attacks.

For more information contact Net-Ctrl direct through our Contact Page, or call us direct on 01473 281 211.

View the original article by Gemalto.

Multi-gigabit is right here, right now

Friday, October 26th, 2018

I recently came across an interesting TechTarget article that discusses when an organization should upgrade to multi-gigabit (mGig) switches to support a new generation of 802.11ax access points (APs). As we’ve previously discussed here on the Ruckus Room, the IEEE 802.11ax (Wi-Fi 6) standard features multiple enhancements that enable access points to offer an expected four-fold capacity increase over its 802.11ac Wave 2 predecessor (Wi-Fi 5) in dense scenarios.

The introduction of 802.11ax (Wi-Fi 6) access points is certainly timely, as many organizations are already pushing the limits of the 802.11ac (Wi-Fi 5) standard, particularly in high-density venues such as stadiums, convention centers, transportation hubs, and auditoriums. Indeed, the proliferation of connected devices, along with 4K video streaming, is placing unprecedented demands on networks across the globe.

To accommodate the demand for increased capacity, some organizations have begun deploying 802.11ax (Wi-Fi 6) access points alongside existing 802.11ac (Wi-Fi 5) access points, with the former expected to become the dominant enterprise Wi-Fi standard by 2021. To take full advantage of the speeds offered by 802.11ax (Wi-Fi 6) APs (up to 5 gigabits per second), organizations have also begun installing multi-gigabit switches to either replace or supplement older infrastructure. This is because system administrators cannot ensure a quality user experience by simply upgrading one part (access points) of a network. To reap the benefits of 802.11ax (Wi-Fi 6) requires upgrades on the switch side as well.
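The bottleneck argument is simple arithmetic: any wireless throughput beyond the uplink speed is wasted at the switch port. A back-of-the-envelope sketch, taking the 5 Gbps figure above at face value:

```python
# Back-of-the-envelope: the AP's wired uplink caps usable wireless throughput.
ap_wifi_gbps = 5.0  # headline 802.11ax (Wi-Fi 6) AP capability cited above

for uplink_gbps in (1.0, 2.5, 5.0):  # GbE port vs. multi-gigabit (NBASE-T) ports
    usable = min(ap_wifi_gbps, uplink_gbps)
    print(f"{uplink_gbps} Gbps uplink -> at most {usable} Gbps to the wired network")
```

On a standard gigabit port, four-fifths of that headline capacity can never reach the wired network, which is exactly why the switch side needs upgrading too.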

The transition to multi-gigabit switches

It is important to emphasize that the transition to multi-gigabit switches does not necessarily require a wholesale infrastructure upgrade. It can happen gradually, adding a few switches as needed. Furthermore, most multi-gigabit switches today include a mix of multi-gigabit and gigabit ports. Only those ports connected to 802.11ax (Wi-Fi 6) APs require multi-gigabit speeds, while the remaining gigabit ports are adequate for computers, printers, VoIP phones, cameras, and other Ethernet devices.

With the introduction of 802.11ax (Wi-Fi 6) starting now and an avalanche of IoT connections approaching, higher speed in the wired infrastructure is critical to prevent bottlenecks and maintain optimal network performance. I suggest that the transition to multi-gigabit switches should start now. With the average life of a switch being 5 to 7 years – and up to 10 years for many institutions – the need for multi-gigabit connections will almost certainly be upon us within this timeframe.

Read the original post by Rick Freedman at The Ruckus Room.

Facing the Facebook Breach: Why Simple SSO is Not Enough

Friday, October 26th, 2018

Let’s ‘face’ it. The September 2018 Facebook breach was not only a ‘mega’ breach in terms of the 50 million compromised users affected, but also a severe one due to the popularity of the social media giant. To recap, cyber criminals got hold of users’ Facebook login credentials. The breach was compounded by the fact that many users utilize their Facebook credentials to log into other social media sites, which means the hackers gained access not only to a user’s Facebook account, but to every other account that uses Facebook login credentials.

SSO not a social media fashion – it’s an enterprise must

In essence, the Facebook credentials act as a simple, ‘eat-all-you-want’ Single Sign-On (SSO) for other social platforms. But the popularity of SSO solutions is not just a Facebook fashion. It answers a viable business need: the convenience of organizations that need access to their day-to-day web and cloud-based applications. Simple Single Sign-On offers clear advantages for enterprises: no need to maintain a separate set of passwords for each and every application; reduced IT overload and fewer password reset requests; and increased productivity for employees, contractors and remote workers, who authenticate once and access everything they need, any time and any place.

The demand for SSO in enterprises has grown with the rise in the number of web and cloud-based apps. However, along with wide SSO implementation has come the risk associated with simple SSO. Only a month before the Facebook breach, the potential ‘massive’ security dangers of Single Sign On were discussed at the USENIX conference in Baltimore. The paper describes how criminals can gain control of numerous other web services when one account is hacked.

Google+ access to 3rd party apps now a minus

When it comes to third-party app violations, Google has not been spared. Its “Project Strobe” revealed stark findings related to the third-party access APIs for Google+ users. Due to a bug, third-party apps were granted access to profile information that users had not marked public to begin with. As a result, Google recommended sunsetting Google+ for consumers and concentrating R&D efforts on giving enterprises better control over what account data they choose to share with each app. Apps will need to show requested permissions one at a time, each in its own dialog box, as opposed to all requested permissions in a single screen.

Smart SSO with role-based policies

The risks that consumers were exposed to as a result of buffet-style sign on in the Facebook case also apply to the enterprise. Fortunately, there is a solution: to maintain the convenience of single sign on without compromising on security, enterprises can use Smart Single Sign-On. With a smart SSO solution such as Gemalto’s SafeNet Trusted Access, enterprises can define conditional access policies. These policies can restrict or relax access to various applications, depending on the risk. For example, groups of users would be able to authenticate only once when working in the office, but have to re-enter their password or another form of 2FA (e.g. SMS, pattern-based code, hardware token) for more restricted access.
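Conditional access policies of this kind can be sketched as a simple rules function. The app names, risk rules and factor names below are invented for the illustration; they are not SafeNet Trusted Access's actual policy model:

```python
# Illustrative sketch of scenario-based access policy evaluation. All rules,
# app names, and authentication factors here are hypothetical examples.

SENSITIVE_APPS = {"payroll", "hr-records"}

def required_auth(app, on_trusted_network, already_authenticated):
    """Return the authentication step required for this access attempt."""
    if not already_authenticated:
        return "password+otp"  # first login of the session: full 2FA
    if app in SENSITIVE_APPS:
        return "otp"           # step-up even within an existing SSO session
    if not on_trusted_network:
        return "otp"           # off-site access: re-verify with a second factor
    return "none"              # low risk: ride the existing SSO session

print(required_auth("email", on_trusted_network=True, already_authenticated=True))    # none
print(required_auth("payroll", on_trusted_network=True, already_authenticated=True))  # otp
print(required_auth("email", on_trusted_network=False, already_authenticated=True))   # otp
```

The point of the sketch is the shape of the decision, not the specific rules: low-risk access keeps SSO's convenience, while risky combinations trigger step-up authentication.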

To help increase trust without sacrificing the convenience of SSO for most apps and scenarios, stepping up authentication after the SSO login is an advantage. Enterprises can choose their access controls for specific user groups, sensitive apps and contextual conditions by applying scenario-based access policies.

Trusted Identity as a Service Provider

Using access management, enterprises can federate dozens of cloud applications without unnecessary burdens on IT teams, while keeping in place the necessary protections.

With Smart SSO, the proliferation of cloud apps need not lead to a feast of security breach reports. To learn more about the smart face of single sign-on, and prevent an iDaaS-ter (Identity as a Service disaster), download the fact sheet, Matching Risk Policies to User Needs with Access Management, read more about Identity as a Service or watch how Gemalto SafeNet single sign-on solutions work in the cloud.

View the original post at

Gemalto’s vision for next generation digital security

Thursday, October 25th, 2018

Digital transformation is a term that we’ve all heard a lot over the last 10 years, often in the context of a specific industry or process. But it’s undeniable now that the entire world is going through a digital transformation that is touching every aspect of our lives – how we live, how we work and how we discover the wider world around us.

An increasingly digital world means an ever-increasing number of pieces of data being exchanged every time we use an online service or a connected device. There are already billions of these exchanges taking place every day, and it’s estimated that by 2025, there will be 50 times more individual digital interactions than there were in 2010. This data defines our online lives, so being able to trust it is critical. With expectations from enterprises and consumers growing, both in the amount of information we share and how it’s protected, the challenge is a significant one.

Traditional security is no longer enough

Breaches are growing every year, across all sectors, with British Airways and Air Canada among the most recent high profile victims. Our Breach Level Index has tracked the number of data records lost or stolen since 2013, and with an estimated 5 million more being added every day, the total should easily hit a staggering 10 billion before the end of this year.

Technology firms have borne the brunt of these breaches but everyone is a target, from entertainment to healthcare and even education. In the majority of cases, the main cause of the attacks is identity theft. And once inside the network, the real damage comes from unencrypted data – shockingly, 96% of breaches involved unencrypted data that the hacker could easily profit from (particularly in the case of credit card details).

The ever-growing list of high profile breaches shows that traditional security solutions are reaching their limits. Faced with a worldwide digital transformation that doesn’t look like it is set to slow down, we need to deploy a new generation of digital security solutions. This next-generation security must help organizations verify users’ identities in a purely online context. It must also remove the need for people to remember hundreds of (weak) passwords and shouldn’t add intrusive security steps (which is why I see a growing role for biometrics and risk-based authentication). Finally, it needs to ensure that individuals’ digital privacy is respected and their data isn’t monetized – unless they’ve given their express permission. If not, people will leave the service and regulators will come down on offenders with heavy fines.

The portfolio of security services that we have built up over the last decade has put us in a unique position to help Service Providers and Governments answer these challenges by providing trusted digital identities to combat ID theft, and protection for previously unencrypted data.

Next generation digital security

Our strategic goal is to help our customers protect their entire digital service cycle from sign-up to opt-out. This starts when a new user has to prove his or her identity to register for a service, after which they are delivered a strong digital ID in the form of a smartcard, a digital token or by using biometric data. When they log in, we can authenticate them using multiple factors and modes – from risk-based analysis to facial recognition. When using the service, we can encrypt all data using key management techniques and hardware security modules. And when they leave, cryptographic account deletion means their data is unrecoverable.

We believe that there are four key pillars to next-generation digital security:

  • Open. We don’t believe in building walls around digital assets. To add value, people and data must be able to flow in an open, decentralized, federated yet secure, way.
  • Proven. It’s not enough to just say you’re an expert in security – you have to prove it, time and time again. Companies need measurable fraud reduction and liability management, and our long-term blue-chip customers are the best evidence of our capability here.
  • Hybrid. Security tools must be designed to work in the real world. That means data security must be flexible enough to deal with a mix of hybrid, on-premise and cloud IT environments.
  • Convenient. If security stops people from doing what they need to do, it’s failed. We’re providing smooth user experiences by leveraging technology like biometrics to help make authentication frictionless and invisible.

We’re proud to play our part in protecting the world’s data, and enabling organizations across the globe to have successful digital transformations. As you may have seen from the announcement by Thales of the proposed acquisition of Gemalto, they have the same view of the growing needs for digital security as we do. The plan is to keep Gemalto’s scope intact and coherent within a new global business unit at Thales; our activities would be combined with Thales assets and expertise in cybersecurity, data analytics and artificial intelligence, which would only increase our ability to fulfil this mission.

Interested in reading more on our vision for next-generation security?

This article originally appeared on Philippe Vallée’s LinkedIn profile.

Encryption and the Fight Against Threats from Emerging Data-Driven Technologies

Thursday, October 25th, 2018

It has been a year since the massive breach of credit reporting giant Equifax, which exposed 143 million U.S. consumers to identity theft and other losses. Today, even more businesses are exposed to rapidly changing technologies that are hungry to produce, share, and distribute data. This blog explores the dangers of leaving high-value, sensitive information unprotected. It also provides a three-step approach against inevitable data breaches, with encryption at its core.

After Equifax, Do Emerging Technologies Bring New Dilemmas?

Few things are more disappointing than high-impact disasters that could have been averted. When credit reporting giant Equifax announced that it had been breached in May 2017, the personally identifiable information (PII) of 143 million U.S. consumers was stolen. Further investigation revealed that Equifax not only failed to apply critical vulnerability patches and perform regular security reviews, but also stored sensitive information in plaintext without encryption.

The Equifax breach, the worst data breach in history, was preventable. The attack stemmed from a critical vulnerability in Apache Struts for which a patch had been available since March 2017, two months before the breach. There are multiple ways to defend against an inevitable breach that uses zero-day vulnerabilities, and one of the strongest is to encrypt high-value, sensitive data at rest.

Every day, approximately 7 million records are lost or stolen because of data breaches. The majority of the data in these breaches was unsecured or unencrypted. A global study on the state of payment data security revealed that only 43% of companies use encryption or tokenization at the point of sale.

Today’s IT security experts face new challenges. Small businesses and organizations the size of Equifax alike have started to implement high-technology trends in the fields of democratized artificial intelligence (AI), digitalized ecosystems, do-it-yourself biohacking, transparently immersive experiences and ubiquitous infrastructure. As emerging technologies spread into more industries beyond banks and government agencies, the risk of another Equifax disaster grows closer. IT security teams need to ensure that sensitive data is protected wherever it resides.

Breaking Myths about Encryption

Encryption can cover threat scenarios across a broad variety of data types. Of all recorded breaches since 2013, only 4% were ‘secure breaches’ – those where encryption was used. Yet businesses tend to bypass it in favour of perimeter defences and other newer technologies because of common misconceptions.

Many decision makers regard encryption as a costly solution that only applies to businesses with hardware compliance requirements. Encryption services, however, have grown to offer scalable data solutions. Encryption empowers businesses with the choice to encrypt data at one or more of the following levels: application, file, database, and virtual machine. Encrypting data from the source, managing keys, and limiting access controls assure that data is protected on both the cloud provider’s and the data owner’s ends.

Encrypting data is a flexible investment that ensures high levels of security and compliance for the widest range of businesses. A reliable encryption service can free businesses from worrying about data tampering, unauthorized access, insecure data transfers, and compliance issues.

In an age of inevitable data breaches, encryption is a necessary security measure that can render data inaccessible to attackers or useless to illegal vendors.

The Value of ‘Unsharing’ Your Sensitive Data

Today’s businesses require data to be shared in more places, where it rests at constant risk of theft or malicious access. Relying on perimeter protection alone is a reactive approach that leaves data unprotected from unknown and advanced threats, such as targeted attacks, new malware, or zero-day vulnerabilities.

More organizations are migrating data to the cloud, enabling big data analysis, and granting access to potential intellectual property or personally identifiable information. It is vital for organizations to start ‘unsharing’ sensitive data. But what does it mean to unshare?

Unsharing data means ensuring that high-value, sensitive information, such as intellectual property, personally identifiable information, and company financials, remains on lockdown wherever it resides. It means that only approved users and processes are able to use the data.

This is where encryption comes in. To fully unshare data, organizations need to encrypt everything. Here are three steps on how to unshare and protect sensitive data through encryption:

  1. Locate sensitive data – Organizations need to identify where data resides in cloud and on-premise environments.
  2. Encrypt sensitive data – Security teams need to decide on the granular levels of data encryption to apply.
  3. Manage encryption keys – Security teams also need to manage and store keys for auditing and control.

Despite common myths surrounding data encryption, remember that applying it gives companies the greatest return by providing both data protection and authorized access. To learn more about the value of unsharing your data and applying an encryption-centered security approach, you can read our ebook titled Unshare and Secure Sensitive Data – Encrypt Everything.

View the original post at

Wi-Fi 6 fundamentals: What is 1024-QAM?

Thursday, October 25th, 2018

IDC sees Wi-Fi 6 (802.11ax) deployment ramping significantly in 2019 and becoming the dominant enterprise Wi-Fi standard by 2021. This is because many organizations still find themselves limited by the previous Wi-Fi 5 (802.11ac) standard. This is particularly the case in high-density venues such as stadiums, convention centres, transportation hubs, and auditoriums. With an expected four-fold capacity increase over its Wi-Fi 5 (802.11ac) predecessor, Wi-Fi 6 (802.11ax) is successfully transitioning Wi-Fi from a ‘best-effort’ endeavour to a deterministic wireless technology that is fast becoming the de-facto medium for internet connectivity.

Wi-Fi 6 (802.11ax) access points (APs) deployed in dense device environments such as those mentioned above support higher service-level agreements (SLAs) to more concurrently connected users and devices – with more diverse usage profiles. This is made possible by a range of technologies that optimize spectral efficiency, increase throughput and reduce power consumption. These include 1024-Quadrature Amplitude Modulation (QAM), Target Wake Time (TWT), Orthogonal Frequency-Division Multiple Access (OFDMA), BSS Coloring and MU-MIMO.

In this article, we’ll be taking a closer look at 1024-QAM and how Wi-Fi 6 (802.11ax) wireless access points can utilize this mechanism to significantly increase throughput.


Quadrature amplitude modulation (QAM) is a highly developed modulation scheme, used throughout the communications industry, in which data is transmitted over radio frequencies. For wireless communications, QAM is a signal in which two carriers (two sinusoidal waves) shifted in phase by 90 degrees (a quarter out of phase) are modulated, and the resultant output consists of both amplitude and phase variations. These variations form the basis for the transmitted binary bits, the atoms of the digital world, that result in the information we see on our devices.

Two sinusoidal waves shifted by 90 degrees

By varying these sinusoidal waves through phase and amplitude, radio engineers can construct signals that transmit an ever-higher number of bits per hertz (information per signal). Systems designed to maximize spectral efficiency care a great deal about bits/hertz efficiency and thus are always employing techniques to construct ever denser QAM constellations to increase data rates. Put simply, higher QAM levels increase throughput capabilities in wireless devices. By varying the amplitude of the signal as well as the phase, Wi-Fi radios are able to construct the following constellation diagram that shows the values associated with the different states for a 16 QAM signal.

16-QAM constellation example

While the older Wi-Fi 5 (802.11ac) standard is limited to 256-QAM, the new Wi-Fi 6 (802.11ax) standard incorporates an extremely high optional modulation scheme (1024-QAM), with each symbol (a point on the constellation diagram) encoding a larger number of data bits when using a dense constellation. In real-world terms, 1024-QAM enables a 25% data rate increase (throughput) in Wi-Fi 6 (802.11ax) access points and devices. With over 30 billion connected “things” expected by 2020, higher wireless throughput facilitated by 1024-QAM is critical to ensuring quality-of-service (QoS) in high-density locations such as stadiums, convention centres, transportation hubs, and auditoriums. Indeed, applications such as 4K video streaming (which is becoming the norm) are expected to drive internet traffic to 278,108 petabytes per month by 2021.
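The 25% figure falls straight out of the constellation sizes: an M-point QAM constellation carries log2(M) bits per symbol, so 1024-QAM carries 10 bits where 256-QAM carries 8. A quick worked check:

```python
from math import log2

# Bits carried per symbol is log2(M) for an M-point QAM constellation.
for m in (16, 256, 1024):
    print(f"{m}-QAM: {int(log2(m))} bits per symbol")

# The headline Wi-Fi 6 gain: 1024-QAM versus Wi-Fi 5's 256-QAM ceiling.
gain = log2(1024) / log2(256) - 1
print(f"data-rate increase: {gain:.0%}")  # 10 bits vs 8 bits -> 25%
```

This per-symbol gain applies only under good signal conditions: denser constellations pack points closer together, so 1024-QAM demands a higher signal-to-noise ratio than 256-QAM before those extra bits survive the air.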

Ensuring fast and reliable Wi-Fi coverage in high-density deployment scenarios with older Wi-Fi 5 (802.11ac) APs is increasingly difficult as streaming 4K video and AR/VR content becomes the norm. This is precisely why the new Wi-Fi 6 (802.11ax) standard offers up to a four-fold capacity increase over its Wi-Fi 5 (802.11ac) predecessor. With Wi-Fi 6 (802.11ax), multiple APs deployed in dense device environments can collectively deliver required quality-of-service to more clients with more diverse usage profiles.

This is made possible by a range of technologies – such as 1024-QAM – which enables a 25% data rate increase (throughput) in Wi-Fi 6 (802.11ax) access points and devices. From our perspective, Wi-Fi 6 (802.11ax) is playing a critical role in helping Wi-Fi evolve into a collision-free, deterministic wireless technology that dramatically increases aggregate network throughput to address high-density venues and beyond. Last, but certainly not least, Wi-Fi 6 (802.11ax) access points are also expected to help extend the Wi-Fi deployment cycle by providing tangible benefits for legacy wireless devices.

View the original post by Dennis Huang at The Ruckus Room.

Wi-Fi 6 fundamentals: Basic Service Set Coloring (BSS Coloring)

Thursday, October 25th, 2018

According to IDC, 802.11ax (Wi-Fi 6) deployment is projected to ramp significantly in 2019 and become the dominant enterprise Wi-Fi standard by 2021. This is because Wi-Fi 6 delivers faster network performance and connects more devices simultaneously. It will also help transition Wi-Fi from a ‘best-effort’ endeavour to a deterministic wireless technology – fitting, now that Wi-Fi is the de facto medium for internet connectivity.

With a four-fold capacity increase over its 802.11ac (Wi-Fi 5) predecessor, Wi-Fi 6 deployed in dense device environments will support higher service-level agreements (SLAs) to more concurrently connected users and devices with more diverse usage profiles. This is made possible by a range of technologies that optimize spectral efficiency, increase throughput and reduce power consumption. These include BSS Coloring, Target Wake Time (TWT), Orthogonal Frequency-Division Multiple Access (OFDMA), 1024-QAM and MU-MIMO.

In this article, we’ll be taking a closer look at BSS Coloring and how Wi-Fi 6 wireless access points (APs) can utilize this mechanism to maximize network performance by decreasing co-channel interference and optimizing spectral efficiency in congested venues. These include high-density environments such as stadiums, convention centres, transportation hubs, and auditoriums.

BSS Coloring and Wi-Fi 6

Legacy high-density Wi-Fi deployments typically saw multiple access points assigned to the same transmission channels due to a limited amount of spectrum – an inefficient paradigm that contributed to network congestion and slowdowns. Moreover, legacy IEEE 802.11 devices were unable to effectively communicate and negotiate with each other to maximize channel resources. In contrast, Wi-Fi 6 access points are designed to optimize the efficient reuse of spectrum in dense deployment scenarios using a range of techniques, including BSS Coloring.

This mechanism intelligently ‘colour-codes’ – or marks – shared frequencies with a number included in the PHY header that is passed between the device and the network. In real-world terms, these colour codes let access points decide whether simultaneous use of the spectrum is permissible: the channel is treated as busy and unavailable only when a frame carrying the same colour is detected. This helps mitigate the problem of overlapping Basic Service Sets (OBSS), which in turn enables a network to transmit data to multiple devices in congested areas more effectively and concurrently. It does so by identifying OBSS, negotiating medium contention and determining the most appropriate interference-management techniques. Colouring also allows Wi-Fi 6 access points to precisely adjust Clear Channel Assessment (CCA) parameters, including energy-detection (adaptive power) and signal-detection (sensitivity threshold) levels.
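The colour-based deferral decision can be sketched roughly as follows. This is an illustrative Python sketch of the idea only; the threshold names and dBm values are assumptions for the example, not the figures defined in the 802.11ax specification:

```python
MY_COLOR = 3            # colour assigned to this AP's BSS (illustrative)
LEGACY_CCA_DBM = -82    # assumed normal (sensitive) signal-detect threshold
OBSS_PD_DBM = -72       # assumed more permissive threshold for other colours

def channel_busy(frame_color: int, rssi_dbm: float) -> bool:
    """Decide whether a detected preamble should make us defer.

    Same colour  -> same BSS: defer at the normal, sensitive threshold.
    Other colour -> overlapping BSS: defer only if the signal is strong,
    which lets distant, differently coloured cells reuse the channel.
    """
    if frame_color == MY_COLOR:
        return rssi_dbm >= LEGACY_CCA_DBM
    return rssi_dbm >= OBSS_PD_DBM

print(channel_busy(3, -80))  # True: own colour heard, defer
print(channel_busy(5, -80))  # False: other colour, weak -> reuse channel
```

The key point the sketch captures is that a weak frame from a differently coloured BSS no longer freezes the channel, which is exactly the spectral-reuse gain described above.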

Designed for high-density connectivity, Wi-Fi 6 offers up to a four-fold capacity increase over its Wi-Fi 5 predecessor. With Wi-Fi 6, multiple APs deployed in dense device environments can collectively deliver the required quality of service (QoS) to more clients with more diverse usage profiles. This is made possible by a range of technologies – such as BSS Coloring – that maintain network performance even in heavily congested, co-channel-interference environments. From our perspective, BSS Coloring will play a critical role in helping Wi-Fi evolve into a collision-free, deterministic wireless technology as the IEEE looks to integrate future iterations of the mechanism into new wireless standards.

View the original post at The Ruckus Room.

Got Six?

Friday, October 12th, 2018

No, I am not talking about six-pack abs or beer. I am talking about your Wi-Fi.

Not all Wi-Fi is created equal. Some are slow, but some are fast, even ludicrously fast – which matters a lot when downloading or uploading large files. Some don’t use the radio frequency (RF) spectrum efficiently, but some use this spectrum highly efficiently – very consequential since the spectrum is a scarce, precious resource. Some Wi-Fi access points can talk to only one device at a time, but some can talk to four devices at the same time, while there are some that can talk to even more at the same time – very relevant given the exponential growth in the number of Wi-Fi-capable devices in homes, offices, schools, etc.

The evolution of Wi-Fi

Wi-Fi has gone through six generations over the last 25 years, and over those generations its speed and efficiency have improved by roughly three orders of magnitude. The latest sixth-generation Wi-Fi, based on the 802.11ax standard, supports a maximum data rate of nearly 10 Gbps and a spectral efficiency reaching 62.5 bps/Hz, allowing concurrent transmissions to/from as many as 74 devices (using OFDMA in a 160 MHz channel). Compare this to the initial 802.11 standard’s peak data rate of 2 Mbps and spectral efficiency of 0.1 bps/Hz. This evolution has been largely transparent to most end users. Users experience better Wi-Fi when connecting to newer generations, but they have no easy way to identify which generation they are on – unlike cellular, where one can simply glance at the symbol in the corner of the phone to see whether the connection is 3G, 4G, or another cellular technology. When a phone shows E – displayed when the connection uses a 2.5G technology called EDGE – most users realize it is going to take a long time to upload their photo to Facebook compared to when the phone shows 4G.
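The headline figures are easy to sanity-check with a little arithmetic – the 62.5 bps/Hz value is simply the ~10 Gbps peak rate spread over the widest (160 MHz) 802.11ax channel:

```python
legacy_rate_bps = 2e6    # original 802.11 (1997) peak: 2 Mbps
ax_rate_bps     = 10e9   # 802.11ax headline peak: ~10 Gbps
legacy_eff      = 0.1    # original spectral efficiency, bps/Hz
channel_hz      = 160e6  # widest 802.11ax channel

ax_eff    = ax_rate_bps / channel_hz        # 62.5 bps/Hz
rate_gain = ax_rate_bps / legacy_rate_bps   # 5000x in raw speed
eff_gain  = ax_eff / legacy_eff             # 625x in spectral efficiency
print(ax_eff, rate_gain, eff_gain)
```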

The emergence of Generational Wi-Fi®

Wi-Fi Alliance (WFA) is attempting to change the lack of means to identify which generation of Wi-Fi a user is using through the introduction of a program called Generational Wi-Fi®. When fully rolled out, devices will display a number alongside the Wi-Fi signal strength bar indicating the generation of Wi-Fi technology in use. This makes Wi-Fi easier to understand and also helps set the right user experience expectations.

As an example, when connected to a latest-generation access point, various generations of smartphones could at some point display the symbols summarized below.

When this happens, I would expect a call from my dad asking why his Wi-Fi is not 6. I am going to preempt this by shipping him a Ruckus R730, which has sixth-generation Wi-Fi 6 technology, with several new features, built in. When my dad does upgrade his phone, he will simply tell me that, as he expected, his Wi-Fi is SIX. And as he roams to different Wi-Fi hotspots, he may even comment that the Wi-Fi at the cricket stadium is better because it is 6, while it is not so good at the football stadium because it is 5. This is the power of a recognizable name.

Now, my question to you is: “got six?” Oh, did I tell you that the R730 is not shipping just to friends and family, but is now generally available?

View the original post on

Yes, a Data Breach Is Inevitable: Here’s Why and What You Should Do

Thursday, October 11th, 2018

Why data encryption is your last line of defence in a data breach

The recent SingHealth breach is considered the worst cyber attack in Singapore’s history, resulting in the loss of millions of private records and sensitive data. The leaked data affects not only SingHealth but also everyone whose data was stolen. In this blog, we discuss why perimeter defence alone is not a foolproof solution, and why you should shift your focus to accepting that a breach is inevitable.

Perimeter-based defences: No longer up to the task?

In recent years we’ve seen data breaches of various scales, from small-time breaches to large-scale attacks like the recent SingHealth breach (1.5 million users), the Facebook breach (87 million users), and the massive Equifax breach (146 million users) in 2017. The SingHealth breach is currently under investigation by a Committee of Inquiry (COI), underscoring the complexity of such a large-scale attack.

A recent study revealed that hackers are 80% more likely to attack organizations in the Asia Pacific (APAC) region because of weaknesses in their cybersecurity infrastructure. SingHealth joins several other high-profile breaches seen in the region since 2016 – a region that low cybersecurity awareness and weak regulations have made a thriving environment for cybercrime.

Deprioritizing cybersecurity is no longer an option, and companies are already taking steps to put security measures in place. With security professionals facing new threats every day, there has been much discussion about whether today’s traditional network security is enough.

In a traditional network security setup, firewalls, antivirus software, and intrusion detection systems work together to keep threats out. However, while these prevention methods are necessary, they may no longer be up to the task. As government contractors and software vendors fall victim to large-scale breaches, organizations with fewer IT resources are starting to wonder whether prevention and a strong perimeter alone are the best approach.

Belief vs. reality

In Gemalto’s 2017 Data Security Confidence Index report, we found that 94% of businesses claim their perimeter security technology is effective at keeping threats at bay and unauthorized users out of their network. In the same study, we also found that 65% of businesses are not extremely confident that their data would be secure following a breach. After all, employing perimeter-based security alone does not amount to an impenetrable wall around a company’s IT infrastructure.

A change in (data) mindset

I had a conversation with an ethical hacker once, who told me why he prefers being a hacker instead of a security expert. He told me, “As hackers, we just need to succeed once. But as a security defence person, you have to succeed every time!”

Security has always been a game of prevention. But even with multiple layers of security, organizations still fall victim to attacks – proof that perimeter defence is ineffective without complementary layers of security behind it. In fact, 91% of breaches begin with a phishing email as the start of the infection chain, as employees are duped into clicking malicious links.

Suffice to say, even with an effective perimeter architecture, attackers can and will gain access to your data – the ‘crown jewels’. Ironically, data security is the area most organizations neglect, because they make one of the biggest mistakes of all: assuming their defences will work as planned. Organizations assume the person manning the network operations centre (NOC) and the security operations centre (SOC) won’t be on holiday the day the first alert comes in. They trust that all the end-user training they conducted won’t go down the drain. In an ideal world our expectations would line up perfectly with reality, but that is rarely the case. Do we really want to take things for granted and face the music when a breach finally happens? Or do we want to be prepared for it?

True cybersecurity awareness assumes from the outset that a breach is inevitable. Protect everything that is truly vital to your organization, and accept that the rest will be compromised.

What, then, can organizations do once we accept that a breach is inevitable?

“When your business is eventually breached, will your data be secure?”

At Gemalto, we assume that every business will be hacked at some point – and sooner or later, every business will be. That’s why we take a three-step approach. Before it happens, ask yourself these three questions to help secure the breach.

1) “Where is my data?”

Knowing where your sensitive data resides is the first and most important step in any data security strategy. Once you have located it, encrypt it.

Gemalto’s 1H 2018 Breach Level Index Report shows that 99% of all breaches involved data that was not encrypted. Encryption is the last and most critical line of defence in the event of a breach, so it’s important that it’s done properly in order to be effective.
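As a purely illustrative sketch of the principle – encrypted data at rest is useless to a thief without the key – here is a toy Python example. The XOR-of-SHA-256 keystream below is NOT production cryptography; a real deployment should use a vetted authenticated cipher (e.g. AES-GCM) from an audited library, with keys held apart from the data:

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR data against a SHA-256 counter keystream."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

key = secrets.token_bytes(32)       # stored separately from the data
record = b"sensitive patient record"
ciphertext = keystream_xor(key, record)

# A breach that exfiltrates only the ciphertext yields nothing readable;
# with the key, the rightful owner recovers the record exactly.
print(ciphertext != record)                      # True
print(keystream_xor(key, ciphertext) == record)  # True
```

The point of the sketch is the separation of concerns: the stolen dataset (ciphertext) and the means to read it (the key) live in different places, which is exactly what step two below addresses.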

2) “Where are the keys?”

Now that you’ve identified and encrypted your sensitive data, ask yourself where and how to secure your encryption keys. Knowing how to manage and store your encryption keys is the next step we recommend in securing the breach. This ensures your ownership and control over your encrypted data at all times.

3) “Who has access to my data?”

Data encryption and key management mean little without knowing who has access to your corporate resources and applications. Access management is the final – but highly important – step of your data breach strategy. It provides additional security and visibility, verifying users’ identities and granting the appropriate access controls.
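As a sketch of what such an access check might look like in application code – the role and permission names here are invented for illustration, not any particular product’s model:

```python
# Minimal role-based access check: a user's role must explicitly carry
# a permission, otherwise access is denied (least privilege by default).
ROLE_PERMISSIONS = {
    "clinician": {"read_record", "update_record"},
    "billing":   {"read_invoice"},
    "auditor":   {"read_record"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Deny by default; unknown roles get no permissions at all."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("clinician", "read_record"))  # True
print(is_allowed("billing", "read_record"))    # False: not in role's set
```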

A multi-layered security approach will greatly reduce the risk of your sensitive data falling into cybercriminals’ hands. By implementing Gemalto’s three-step approach – encrypting all sensitive data, securing your keys, and managing user access – you can effectively prepare for a breach.

Read Gemalto’s step-by-step guide on securing the breach. Contact Net-Ctrl if you’d like to hear more about how you can assess your data security posture and how Gemalto helps keep data safe.

View the original post at