Hubs are pretty much obsolete these days due to the falling price of switching technology.
The determining factor for dropping hubs is that a hub has a single channel shared among all of its ports.
Each port on a switch has its own channel for data transmission.
On the other hand, hubs are great for traffic 'sniffing'.
In a scenario where you have an unmanaged switch, you cannot sniff traffic from a specific port, for example a firewall's WAN port.
If you place a hub between the (cable/DSL) modem and the firewall, a computer hooked up to any port on the hub can sniff traffic to and from the firewall.
This is useful for troubleshooting purposes or setting up Intrusion Detection Systems (IDS).

"Hardware vs Software"

Aside from the gateway / router of a computer network, another important piece of hardware is the "firewall".
I see a lot of small to medium sized businesses using "conventional" wireless routers from D-Link, TP-Link and Linksys as their firewall.
With viruses, trojans, rootkits and malware becoming more complex, and DDoS attacks more aggressive, a good firewall is pertinent to securing a network.
There are two types of firewalls, namely: hardware and software.

Hardware firewalls are enterprise grade network appliances dedicated to filtering all incoming and outgoing traffic.
These firewalls have a high price tag and often enough a license is needed for every feature / module you want to activate.
The advantages that hardware firewalls such as Cisco ASA firewalls hold are hardware compatibility, a solid UNIX-based Operating System and an unbeatable firewall 'logic'.

Software firewalls on the other hand are more flexible as they are not bound to hardware.
An 'oversized' workstation with good resources (i.e. CPU, memory, network cards) can be configured to be a firewall.
Open-source software firewalls such as 'pfSense', 'iptables' and 'ipfw' are very popular amongst network admins.
Technicians deploying hardware or software firewalls should have a good understanding of how firewalls, routing, access-lists and NAT work, and a good knowledge of the TCP/IP protocol suite.

Neither conventional nor enterprise wireless routers should be used as firewalls for any business network, although small offices do go down this route.
They do not have the necessary features, services or resources to function as the main gateway.
Most notably, wireless routers cannot handle 1,000,000 pps (packets per second); their interface buffers are too small and will therefore be overrun during even low-level attacks, e.g. a SYN attack or ping flood.
Wireless routers also do not offer outbound access-lists to restrict outgoing connections; such a feature would, admittedly, defeat their advertised 'plug-n-play' slogan.
With an open 'back-end' network, any rootkit/trojan/malware can freely communicate out to its host.
In the early years, this was the most common scenario of 'data leaks'.

Back some years, everyone went on a 'DD-WRT' craze, installing this open-source firmware on compatible routers.
DD-WRT gave users wanted features and the ability to customize their wireless router/gateway.
The problem with DD-WRT firmware is that it is 'buggy'.
Because the DD-WRT project is community driven, releases are not consistent, and the introduction of new features or fixes often brings other security flaws with it.

"Not all switches are created equal"

Purchasing the proper hardware is of great importance when building, maintaining or upgrading a computer network.
Many technicians run to the store and grab a gigabit switch based mostly on past usage experience and price tag.
The problem is that this can introduce issues on a network.
Do some good analysis, figure out what you need (i.e port density, future upgrades/additions) before making a purchase.
Get more ports than what is actually needed.
Extra ports mean a higher backplane switching capacity.
You'd think this counts for all switches and switch manufacturers but not all switches are created equal.
For example: TP-Link's TL-SG1008 has a switching capacity of 10Gb/s while Linksys' SE2800 has a switching capacity of 16Gb/s.
Both switches have (8) gigabit ports and no expansion slots.
A gigabit switch port in full-duplex mode can carry 2Gb/s (1Gb/s in each direction).
A good 8-port Gigabit switch therefore supports 8 x 2Gb/s = 16Gb/s.
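That arithmetic can be sketched as a quick check (the function name is our own, for illustration):

```python
# Hypothetical helper (name is ours): rated non-blocking capacity of a switch.
def backplane_capacity_gbps(ports: int, port_speed_gbps: float = 1.0) -> float:
    """Each full-duplex port carries traffic both ways at once, so a
    non-blocking switch needs ports x 2 x port-speed of backplane capacity."""
    return ports * 2 * port_speed_gbps

print(backplane_capacity_gbps(8))   # 16.0 - what a 'good' 8-port gigabit switch needs
print(backplane_capacity_gbps(16))  # 32.0 - matches a non-blocking 16-port unit
```

Any switch rated below this number, like the 10Gb/s TL-SG1008, is oversubscribed when all ports run flat out.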
The TL-SG1008D switch has a switching capacity that is somewhat 'lacking' in certain scenarios.
Both switches are good for SOHO applications that require good throughput for web browsing, emailing and VOIP.
For HD video streaming and large file transfers, a higher capacity switch is needed.
The Nexxt ASFRM164U1, a 16-port Ethernet switch, has a total switching capacity of 32Gb/s.
Before gigabyte file sizes and HD streaming, 32Gb/s of concurrent data transfer was enough.
Now, with NAS I/O (Input/Output) and HD video streaming from multiple devices, end users require more switching speed.

"Single Board Computing"

With technology going forward at a fast pace, the introduction of the 'Raspberry Pi' single board computer on Feb 29 2012, was a big achievement.
Not only did this little board bring computing to third world countries but its possibilities were limitless.
The Raspberry Pi can be set up as a computer, gaming station or media player but it can also be used for other more mission critical purposes.
Such purposes being: Firewall, VPN gateway, Proxy, Web / Samba / DDNS / DHCP / DNS server.

In an effort to add Quality of Service and centralize web browsing on a client's (8) computer network, a second-generation Raspberry Pi was set up as a proxy server.
All of the network's web browsing ran successfully for 2 years through the 700MHz CPU, 512MB of RAM and a 32GB USB 2.0 flash-drive file system, without complaints.
During the 2 years in production, the Raspberry Pi was running (5) services, namely: proxy, web server, vpn, ddns and MRTG bandwidth monitoring.
MRTG monitoring was decommissioned early on as the Pi board can handle only so many I/O intensive services.
The Raspberry Pi functioned as a solid stepping stone to a bigger and better server.
All configuration files, scripts and OS tweaks were easily ported to new hardware.

Pi boards were successfully set up as a DDNS server for some clients for CCTV camera remote access.
As the Pi board requires only 5 volts at about 2 amps of current, its tiny energy footprint was perfect for clients that own a house in the Caribbean that is only used a few months out of the year.

Other clients required a site-to-site VPN connection to VPN services in the states to stream HD media from popular services such as Netflix, Hulu and Amazon that are not accessible from the Caribbean IP space.

Now, with the much more powerful Raspberry Pi 3 out, featuring a quad-core 1.2GHz CPU, 1GB of RAM and a built-in wireless card, the possibilities are again limitless.

"Battery Backup, AVR, Surge Suppression & Daisy-chaining"

Walking into a lot of stores and other businesses, I cannot help but notice strange UPS scenarios.
When you have a lot of power outages like we have on the islands in the Caribbean, putting in a UPS is generally a good idea.
The general assumption amongst (young) technicians is that putting in a UPS protects your devices from power surges.
A UPS really only protects connected devices from power surges if it has a built-in Automatic Voltage Regulator (AVR).
Even so, there have been reports of UPS units with surge protection letting through surges of over 500 volts!
Surges over 400 volts cause degradation which leads to eventual failure.
Nowadays the market is being flooded by UPS manufacturers such as Forza and CDP.
Forza, from usage experience, has a bad charging module.
The battery in most models does not survive repeated charge cycles.
Even APC has a few models that only function as a "battery backup".
Devices plugged into a battery backup only UPS pull power 'direct' from the local power company.
Some UPS units have 'surge protection' ports.
The surge protection ports contain up to three MOVs (Metal Oxide Varistors) that suppress power surges across L-N, L-G and N-G.
Problem with 'old' MOV technology is that it gets extremely hot during suppression events which can lead to fire.
Also, when 'old' MOV technology fails, it gives no indication.
Without any indication, and with MOVs not installed 'in-line', power surges will pass uninhibited through to appliances once the MOVs fail.

Power strips are sometimes daisy-chained to the UPS 'surge protection' ports.
Both the UPS and power strip will try to correct the fault current when a power surge happens.
They will interfere with each other's suppression capabilities and let the fault current through, causing excessive current flow and a possible fire event.
Appliances such as microwaves, transformers, coffee machines and fans (inductive loads) should not be plugged into a UPS or power strip.
LaserJet printers can generate 'surge backs' that can affect appliances plugged into a power strip.
Fluorescent light ballasts should never be wired into UPS systems.
When a ballast fails, it can create a 'ground' leak.
Server cabinets in server rooms and data centers are grounded to the same grid as the UPS.
When a ground leak is introduced, it will energize the cabinet, which becomes an electrical hazard for personnel and equipment.

"Listen before talk"

For many reasons, home and business users tend to do everything over Wifi when building a computer network.
Before going down this path, installers / technicians should know:
- how many users.
- how many devices.
- type of devices.
- application (i.e file sharing / transfer) requirements.

The answers to the above questions will help determine what type of wireless device, or devices, is needed.
Internet Service Providers nowadays configure and install modems with wireless capabilities for customers.
Because of this most home and business users do not see the need to buy an extra wireless device or devices.
Most ISP modems do not have good wireless range nor do they have enough resources (i.e CPU, memory, buffer size) to support all user traffic, namely:
- HD video streaming
- concurrent connections
- concurrent large file transfers
Because of limited resources, running all services (i.e DHCP, NAT, filtering, wireless encryption) is too taxing on ISP modems.
Off-loading some of these services to other devices helps alleviate the bottleneck at the modem.
Also what needs to be considered is the usage of the built-in switch ports.
Even though the switch ports on some modems are gigabit, the built-in switch does not have enough switching capacity for (concurrent) gigabyte-sized transfers.

Wireless devices work on a "listen before talk" principle.
A Wifi router/access-point can talk to one client at a given time on a single channel. This means all devices in the vicinity that are connected to the wifi router/access-point are waiting in line for their turn.
If the amount of devices exceeds the hardware threshold, the lag time will be so high that users will be left waiting.
This is where MIMO (Multiple In, Multiple Out) technology comes in.
Newer routers have either 2 spatial streams (MIMO 2x2) or 3 spatial streams (MIMO 3x3) to chat with clients on.
802.11ac technology now uses MU-MIMO (Multi User, Multi In Multi Out) technology to service wireless clients faster.
The greatest improvement from MIMO to MU-MIMO is in how bandwidth is "pushed" to wireless devices.
MIMO technology "wastes" bandwidth: it may, for example, dedicate a router's full 750Mbps to an iPhone 6 doing browsing, emailing and VOIP, even though the phone can only use a single 433Mbps stream.
MU-MIMO manages bandwidth better, providing only the bandwidth that is needed.
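The listen-before-talk queueing above implies a simple rule of thumb: on a single channel, each active client sees roughly an equal airtime share. A rough sketch (our own simplification, ignoring protocol overhead and retransmissions):

```python
# Rough sketch (ours): on a single shared channel, n equally active
# clients each see roughly capacity / n of the channel's throughput.
def per_client_throughput(channel_capacity_mbps: float, clients: int) -> float:
    return channel_capacity_mbps / clients

# Ten clients on a 300Mbps channel average only 30Mbps each -
# before overhead and contention make it worse.
print(per_client_throughput(300, 10))  # 30.0
```

This is why adding devices past the hardware threshold makes the lag so visible: every extra client shrinks everyone's share.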

"The Cloud"

What is "the cloud" ?
Everybody nowadays is talking about the cloud or moving their services (i.e website, email) to it.
Cloud technology is a combination of resource/services sharing and HTTP acceleration.
A cloud setup consists of a front and back-end.
Front-end servers, also known as 'nodes', are placed in and around the geographical areas companies want to reach.
The nodes help with DNS round-robin, caching and website acceleration/proxying in regions that do not have proper high-speed internet lines.
Connections from the nodes are forwarded to and from the back-end.
The back-end can be hosted at the customer's location or at a hosting provider.
Every system has a weakness: if the customer's back-end is exposed to the internet and neither they nor the hosting provider have attack mitigation, the customer's service can be taken down, as attacks will bypass the nodes and connection filters.
Since the emergence of the first DDoS attacks in 1999, cloud technology has evolved to provide the tools eCommerce companies need.
Cloud platforms such as Amazon AWS, Cloudflare, CloudFront and Limelight not only deliver content, provide filtering and reduce bandwidth consumption, they can also absorb and mitigate attacks.
Most Caribbean providers can only 'blackhole' the IP of the attacked service to protect the other clients on their network and stay true to their SLA (Service Level Agreement).
For this reason a lot of companies and governments move their email, website and other services to hosting providers in the U.S. and Europe.
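The DNS round-robin behaviour the nodes provide can be illustrated with a toy resolver (our own sketch; the 203.0.113.x addresses are reserved documentation IPs standing in for real nodes):

```python
from itertools import islice

# Toy sketch (ours) of DNS round-robin: each query gets the node list
# rotated one position, spreading clients across the front-end nodes.
def round_robin(nodes):
    i = 0
    while True:
        yield nodes[i:] + nodes[:i]
        i = (i + 1) % len(nodes)

nodes = ["203.0.113.1", "203.0.113.2", "203.0.113.3"]
answers = list(islice(round_robin(nodes), 3))
# Each node takes a turn at the top of the answer, so clients
# (which usually pick the first address) land on different nodes.
print([a[0] for a in answers])  # ['203.0.113.1', '203.0.113.2', '203.0.113.3']
```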

" technology"

In 2014, at the Broadband World Forum in Amsterdam, tech giant Broadcom, China's Triconductor Technology and Israeli startup Sckipio introduced network equipment chips with G.fast technology, which gives new life to DSL (Digital Subscriber Line). G.fast makes it possible for DSL to support 1Gb/s data transfer speeds.
Deployment of the new tech was scheduled for 2016.

"Link Aggregation (LAG)"

Link Aggregation is the method of combining / bonding two or more interfaces in parallel in order to increase throughput.
Switch manufacturers are now putting this feature in SOHO switches.
This is a very important feature to have in a 'managed' switch.
Computer gamers like bonding two gigabit network cards to get the increased throughput for gaming.
Do not be fooled, each network card can handle a maximum of 1Gb/s.
With LAG each switch port in the group, can handle 1Gb/s.
What LAG provides is a wider "lane" of 2, 3, 4, 6 or 8Gb/s (depending on the number of ports grouped together) for concurrent data traffic.
Most servers with good resources (CPU, RAM, NIC, Bus speed) in a gigabit infrastructure can achieve at most 1/3 line speed (300Mb/s).
Desktop computers achieve less than 125Mb/s.
Therefore LAG does not provide much improvement in a gaming scenario.
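The per-flow limit can be sketched as follows (function and numbers are our own, for illustration; real LAG hashes each flow onto one member link):

```python
# Sketch (ours): LAG widens the pipe, but any single flow still rides
# one member link, so it is capped at that link's speed.
def lag_throughput_gbps(member_links: int, flows: int,
                        link_speed_gbps: float = 1.0) -> float:
    """Aggregate throughput: limited both by the number of concurrent
    flows and by the width of the LAG group."""
    return min(flows, member_links) * link_speed_gbps

print(lag_throughput_gbps(2, flows=1))  # 1.0 - a lone gamer gains nothing
print(lag_throughput_gbps(2, flows=8))  # 2.0 - many NAS clients fill both lanes
```

This is why a NAS or a busy file server benefits while a single gaming session does not.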
A better scenario for LAG would be running a NAS, a high I/O (Input/Output) file server, or having multiple inter-connected switches.
Equipment manufacturers such as Netgear now build LAG into their firmware, for example on the 'Nighthawk X800'.
This greatly benefits users that run their entire network, including NAS off their wifi router.
It is of note that LAG will not improve internet usage.
Most ISP modems (Cable / DSL) do not support gigabit speeds through wireless nor copper.
Internet traffic is squeezed down to the connection speed between the end-user and ISP which can vary due to many factors.
LAG performance is therefore not carried all the way through to the ISP.

"20Mhz vs 40Mhz channel bandwidth"

Most routers since 2009 are pushed onto the market with firmware supporting 20/40MHz coexistence.
This does not mean, though, that you can just configure the 40MHz channel bandwidth on the 2.4GHz frequency spectrum.
A wider range of channels in the 40Mhz bandwidth means overlapping channels and (more) interference with other routers/access-points.
In case the 40MHz bandwidth is used on a 2.4GHz radio, there is hardcoded logic in the firmware that triggers a fallback to the 20MHz bandwidth if:
1. a neighboring network is using one or two of the channels that the router configured for 40MHz bandwidth is using.
2. the neighboring router is configured to use both 20/40MHz in "coexistence" mode and the extra frame is detected by the router configured for 40MHz bandwidth.

Changing the channel bandwidth on a 2.4Ghz radio from 20 to 40Mhz does not improve speed.
In areas where there is a lot of 2.4GHz congestion, changing the channel bandwidth to 40MHz will reduce connection reliability.

Non-overlapping channels for 20MHz broadcasting on 2.4GHz are 1, 6 and 11.
Non-overlapping channels for 40MHz broadcasting on 2.4GHz are 3 and 11.
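These channel sets can be checked with a small overlap test (our own simplification: 2.4GHz channel centers sit 5MHz apart, and two channels of a given width collide when their center spacing is less than that width; real spectral masks are wider):

```python
# Sketch (ours): 2.4GHz channel centers are 5MHz apart; two channels of a
# given width overlap when their center spacing is less than that width.
def channels_overlap(ch_a: int, ch_b: int, width_mhz: int = 20) -> bool:
    return abs(ch_a - ch_b) * 5 < width_mhz

# The classic 20MHz set 1/6/11 is overlap-free...
assert not channels_overlap(1, 6) and not channels_overlap(6, 11)
# ...while nearby picks like 1 and 3 collide.
assert channels_overlap(1, 3)
# At 40MHz, channels 3 and 11 just clear each other.
assert not channels_overlap(3, 11, width_mhz=40)
```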

"Cat5, Cat5E or Cat6"

Good cabling complements a network running proper hardware.
Ask an installer which network cable type they would recommend and the likely answer nowadays is: Cat6.
Does anybody really know why?
When is Cat5 or Cat5E appropriate?

Cable manufacturers put out their products labeled with frequencies: "100Mhz, 250Mhz, 350Mhz, 500Mhz".
What does this mean?
The higher the frequency, the more twists the cable pairs have.
It also means the cables contain more copper to achieve all those twists.
With more frequency headroom at the installer's/technician's disposal, small installation mistakes can be tolerated when laying cables.
The frequency label is also of great importance for production companies using video equipment to transmit HD video.
Cable type and length is important in this scenario.
What is the relationship between 100Mhz and 100/1000Mb?
The output of the Transmit/Receive (TX/RX) twisted pairs, measured with an oscilloscope, shows that gigabit Ethernet data transmission has a fundamental frequency of about 62.5MHz.
Therefore Cat5(E), rated at 100MHz, is more than adequate for Fast Ethernet (100Mb) and Gigabit (1000Mb).
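That 62.5MHz figure lines up with how 1000BASE-T signals on the wire; the arithmetic can be sketched as follows (our own simplified model, ignoring coding overhead):

```python
# 1000BASE-T splits 1000Mb/s across 4 wire pairs and carries 2 bits per
# PAM-5 symbol, giving 125 Mbaud per pair; the worst-case fundamental
# frequency on the wire is half the symbol rate.
def fundamental_mhz(bit_rate_mbps: float, pairs: int, bits_per_symbol: int) -> float:
    symbol_rate = bit_rate_mbps / pairs / bits_per_symbol  # Mbaud per pair
    return symbol_rate / 2

print(fundamental_mhz(1000, pairs=4, bits_per_symbol=2))  # 62.5
```

So a 100MHz-rated Cat5(E) cable has comfortable headroom over the roughly 62.5MHz gigabit Ethernet puts on each pair.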
With faster switching technology such as 10Gb Ethernet, Cat6 cabling rated at 250-500Mhz is required.
Conclusion: Unless you are getting ready for a 10Gb backbone, stick with Cat5(E).


In 1986, e-mail was typically downloaded to a recipient’s computer upon receipt and immediately deleted from the e-mail provider’s storage.
The Electronic Communications Privacy Act (ECPA) was written with this behaviour in mind.
It requires a search warrant to retrieve a message from an e-mail provider’s storage only if the message is less than 180 days old and provides for lower standards if the e-mail is left on the server for more than 180 days.
Today, however, e-mail is often both stored on and accessed from remote servers belonging to the e-mail provider, and many people “archive” their e-mail on their provider’s server rather than deleting old messages.
If a law enforcement official obtains information in violation of the law, that information usually cannot be used in court.
The same rule, however, does not apply to electronic information obtained in violation of ECPA.
Without an exclusionary rule, there is little deterrence against government overreaching.
Under the Foreign Intelligence Surveillance Act (FISA), the government need not show suspicion of wrongdoing, and it can conduct electronic and covert searches domestically if the target of these searches is “foreign-intelligence information” from a foreign power or an agent of a foreign power.
Unlike under ECPA, FISA surveillance orders are obtained from a secret court, the Foreign Intelligence Surveillance Court (FISC), and need not ever be made public.
Statute 18 U.S.C. § 2709, as amended by the USA PATRIOT Act, requires wire or electronic communication service providers to hand over subscriber information, billing records and “electronic communication transactional records” if the government certifies, without any judicial review, that the records are relevant to an investigation to protect against international terrorism or clandestine intelligence activities.
The statute allows the government to impose a gag on service providers to prevent the targeted subscribers from learning of the surveillance.
There are also Mutual Legal Assistance Treaties (MLAT) that allow countries to gather and share information.
Under ECPA, FISA and the USA PATRIOT Act, surveillance is directed at U.S. residents.
The result of this surveillance, and of government overreach driven by ongoing terrorism and money laundering, is that information is also gathered on non-residents.

With all of this said, why would a technician or system administrator move a company’s data and/or emailing to for example: Microsoft OneDrive and Office 365?
Clients entrust companies with their personal, private information (i.e I.D / Passport copy, birth certificate, bank statements, utility bill, family tree, deeds) that should be safeguarded and not put on publicly accessible services that screen or hand over data to third parties.
Information leakage should be given a serious thought to avoid liability lawsuits.
As for “cloud” and other online services: re-think in-house services, local data centre setups and the use of SSL / S-MIME certificates.
That way data is safeguarded and managed “privately”, and the communications generated by these services leave the premises only as (encrypted) “carrier traffic”.


In August of 1991 the World Wide Web went live.

Everyone accessed the World Wide Web through dial-up and, as a result, started using a plethora of applications to speed up browsing and downloading.

Nowadays most private computer networks utilise a “proxy” server to cache web browsing data.

A centralized system that forwards all (HTTP / HTTPS) network traffic to and from the internet.

Those were the days when every system administrator installed and configured Microsoft ISA Server.

With a few point & clicks and the installation of their client on every machine, connection proxying was enabled.

Fast forward to today, most system administrators are going open-source with Squid web proxy as it is highly configurable and flexible.

There is no client application to install.

Only a WPAD ‘auto-detect’ script on a given web server in the domain or workgroup is needed, making it very easy to deploy.

As a standalone proxy server, in the event of a server failure, minimal changes have to be made to send network traffic ‘direct’ through the firewall.

Configuration and backups are a breeze as settings are stored in a single or multiple plain-text files, protected by user/group permissions.

It is also of note that Squid web proxy is used by numerous companies on their ‘cloud’ platform to do HTTP acceleration.

Squid web proxy is also used by DDoS mitigation companies such as ‘Prolexic’ to filter unwanted ‘bot’ traffic, and it can be found packaged in the highly downloaded open-source distribution ‘pfSense’.



Companies should have clear policies on equipment access and what is expected of employees.

Employees attending to customers should not have unfettered internet access at their workstation.

A company computer should only access work related websites and services instead of having open access.

Malicious JavaScript and website code, downloaded media and unrestricted USB access are the reasons why company computers slow down, blue-screen and applications fail.


Nowadays a proxy server is pretty much mandatory to not only monitor and throttle internet usage (i.e YouTube) but to detect and diagnose compromised systems.


A proper network design should have:

- HTTP / HTTPS web traffic going through a proxy.

- Open ports limited to TCP 80, 443 and other needed ports, and only for the proxy.

- NAT set up per host and not per subnet.

- Outbound connection access-list assigned to the firewall LAN/inside interface.


In most cases a compromised system will transmit data out to its host through open ports on the firewall.

Most viruses, rootkits and backdoors do not use the system’s configured proxy settings; therefore, with no NAT rule set up for workstations, a DIRECT connection through the firewall cannot be established.
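The effect of NAT-per-host can be shown with a tiny lookup sketch (our own; 192.168.x and 198.51.100.x are private and documentation addresses used as stand-ins):

```python
# Sketch (ours): NAT configured per host instead of per subnet.
# Only the proxy has a translation entry, so a workstation's malware
# trying to go DIRECT through the firewall never gets translated out.
PROXY = "192.168.1.10"
NAT_TABLE = {PROXY: "198.51.100.7"}   # inside IP -> public IP

def translate(src_ip: str):
    """Return the public source address, or None (packet dropped)."""
    return NAT_TABLE.get(src_ip)

print(translate(PROXY))           # 198.51.100.7 - proxy traffic flows out
print(translate("192.168.1.55"))  # None - an infected workstation goes nowhere
```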


With the above mentioned points, additional systems and tuning, company machines and data can be better safeguarded, bandwidth usage and management overhead reduced.



CCTV technology has been progressing from CIF to D1, 960H, 720P and now 1080P HD quality.

MPEG technology has also been upgraded, from H.264 to H.265.

MPEG H.265 provides twice the compression efficiency and better visual quality.

Even with all this, the security software for DVRs / NVRs / cameras (i.e. coax, IP, body) has stayed behind.

CCTV security software (i.e firmware & server installation package) is still dependent on Microsoft, Internet Explorer and ActiveX controls.

In a developing world that is not ruled by Microsoft anymore, users that operate Apple and Android devices and other Linux flavours (i.e. Debian) have browser accessibility issues and therefore fall back on mobile apps.

Mobile apps on the other hand are not maintained and exhibit compatibility issues when device (i.e phone, tablet) software is updated.


Safari browser and Mozilla Firefox do not support ActiveX controls.

Google Chrome does not support ActiveX controls by default, but with the addition of an extension it can be made to, although not without problems.

Windows 8.1’s and 10’s Internet Explorer supports ActiveX controls only for Flash Player updates.

Microsoft Edge does not support ActiveX controls.

With future flavours of Microsoft Windows, Internet Explorer will eventually be phased out.

The question to ask is when CCTV manufacturers will drop ActiveX controls and old frame types and start developing new, stable, flexible security software.

The common workarounds, such as Microsoft virtual machines and browser add-ons, are an annoyance and sometimes buggy.

With hacking activity (i.e ransomware) at an all-time high, running or keeping a computer with an outdated operating system, poses a security risk.



One of our mottos is: “No project is too small”.

With technology moving forward at a fast pace, administrators, technicians and home users have to be on their toes when using the ‘Internet of Things’.

Identity theft is at an all-time high.

Ransomware has caused grief to a lot of countries around the world.

Botnets are becoming larger and more aggressive.



If perpetrators cannot gain access to a company’s infrastructure due to safeguards set up, they will try other avenues.

Hacking does not imply only digital.

People can also be hacked with a phone call and persuasion.

In the past we experienced first hand the following scenario:

Access to an executive’s email was gained.

One of his managers emailed details of his new internet connection at home, paid for by the company.

Amongst the details was his ‘fixed’ IP.

Perpetrators therefore hacked into his private network at home, gaining access to his work computer.

A ‘rootkit’ was planted.

Whenever the computer was on the company VPN, the perpetrators had access to the company’s internal network.


The company mail server was trusted, so the mistake did not lie in the email that was sent.

Multiple mistakes were made, but for the sake of this article the important mistake was thinking that the manager’s home infrastructure did NOT have to be protected by the company’s help desk or network department.

The same care and planning that goes into an office/company infrastructure has to be put into home networks.

Hence our motto is: “No project is too small”.



With almost every device being put on the ‘Internet of Things’, swiss cheese is being made of network security.

As mentioned in our “Proxy” note, proper network design is highly necessary to lessen the effects of viruses, trojans, spyware, malware etc.

A Google search provides a laundry list of viruses, trojans etc that communicate through almost the whole dynamic port range.

Uncontrolled ‘outbound’ access from home and company networks means less administration overhead, but it also enables infected computers or devices to transmit out to ‘any’ host, be it the perpetrator or a victim.

DDoS attacks have become even larger, mostly due to outdated or vulnerable routers from Linksys, Netgear, TP-Link, D-Link, Asus, Huawei etc.

Now CCTV recorders, including those from renowned manufacturers such as HIKVision and Dahua, with vulnerable outdated firmware, unchanged default passwords and enabled default guest / user accounts, can be added to the list.

Vulnerabilities include file upload and ‘write’ access to recorders via telnet.

To generate a DDoS attack, the compromised device just has to generate small, constant traffic; thus a recorder with a single ARM processor and a Fast Ethernet controller is a valuable tool.



With ransomware, perpetrators leave very few footprints, if any at all.

A simple anonymous or spoofed email is sent to businesses or organisations and the mayhem starts.

Again a good infrastructure design for home & office networks and proper configuration will help lessen the problem.

Applications such as “RansomFree” from the company Cybereason tackle the problem at the client level.

We like to tackle the problem with a more centralised approach.

As mentioned in our ‘Proxy’ note, implement a proxy and remove NAT per subnet.

Limit open ports on the proxy and firewall level.

Configure blacklists with known ransomware and malware domains.

Find sources that generate these blacklists daily or weekly and set up an automated job to fetch and maintain the blacklists.
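The fetch-and-maintain job above boils down to normalizing a feed into one domain per line for the proxy/firewall ACL. A minimal sketch (ours; the feed contents here are made up, and real feeds vary in format, with comments, blank lines and mixed case):

```python
# Sketch (ours): normalize a fetched domain blacklist into a sorted,
# de-duplicated, lowercase list suitable for a proxy or firewall ACL.
def parse_blocklist(text: str) -> list[str]:
    domains = set()
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip().lower()  # strip comments/case
        if line:
            domains.add(line)
    return sorted(domains)

feed = """# example feed - hypothetical entries
evil-ransom.example   # known dropper
Tracker.Example
"""
print(parse_blocklist(feed))  # ['evil-ransom.example', 'tracker.example']
```

Schedule the fetch with cron (daily or weekly), write the parsed output to the blacklist file, then reload the proxy.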


For home networks, drop the use of conventional wifi routers such as Linksys, TP-Link etc. as the main Router and Gateway of the network.

Implement Pfsense firewall or a Ubiquiti EdgeRouter or even a Raspberry Pi running IPFire.

The latter depends on the amount of users and devices.

Pfsense and the EdgeRouter enable the setup of proper outbound access filtering that a conventional wifi router cannot provide.

Remove NAT per subnet and add NAT for only your proxy and configure blacklists.

Another option as implemented by our technicians is ‘DNS blackholing’.



Keep home networks in mind when maintaining client networks.

Build or install a good firewall.

“Plug ’n Play”


There is not much to say about plug ’n play other than that it is an administrator’s worst nightmare.

Modem and router manufacturers such as Comtrend, Starbridge, Huawei, Linksys, TP-Link and others advertise their product as being out of the box plug ’n play.

This reduces the buyer’s overhead in configuring the device, finding someone who can configure it, or fiddling with it afterwards to keep a specific service such as Vonage working.

On most modems and routers, the plug ’n play feature is enabled by default.

Providers leave this feature enabled to give their users a ‘hassle free’ connection.

What this does on the other hand is make swiss cheese out of the built-in firewall.


Plug ’n Play allows devices that require access to certain ports and protocols to ‘push’ firewall rule creations from the internal network to the modem / router.

The modem / router automagically adds these rules.

In the case of conventional modems and routers such as Comtrend, Starbridge, Huawei, Linksys, TP-Link and others, this adds ‘Port Forwarding’ rules enabling inbound access to the requesting device.

Because DHCP is run on almost every network, these rules keep lingering long after IP addresses have rotated.

Plug ’n Play can add rules but does not remove them.

A simple port scan from outside the network reveals these open ports.
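The stale-rule problem can be simulated in a few lines (our own sketch; the addresses, port and device names are made up for illustration):

```python
# Sketch (ours) of why UPnP rules go stale: forwards are keyed to an IP,
# DHCP later hands that IP to a different device, and nothing cleans up.
port_forwards = {}   # external port -> internal IP (added, never removed)
dhcp_leases = {}     # internal IP -> device currently holding the lease

def upnp_add(ext_port: int, internal_ip: str):
    port_forwards[ext_port] = internal_ip

dhcp_leases["192.168.1.50"] = "game-console"
upnp_add(3074, "192.168.1.50")          # console requests a forward

# Months later DHCP rotates: the same address now belongs to a laptop,
# yet the router still exposes port 3074 straight to it.
dhcp_leases["192.168.1.50"] = "office-laptop"
print(dhcp_leases[port_forwards[3074]])  # office-laptop
```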

Providers disable logging on their modems as these devices do not have enough resources.

Most users do not enable logging on their (wifi) routers, let alone set up syslog.


With a well-configured stateful firewall, outbound connections that are established will be kept alive.

Connection tracking keeps track of connections; thus ‘new’ connections mimicking already-established ones will be dropped.

On any given day, an average user surfing the net, chatting, sending out emails and downloading does not require any inbound firewall rules to any device on their internal network.
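The connection-tracking behaviour can be sketched with a tiny state table (our own simplification; 203.0.113.x is a reserved documentation address, and real conntrack also tracks protocol state and timeouts):

```python
# Sketch (ours) of stateful filtering: outbound connections create state;
# inbound packets are allowed only when they match tracked state.
conntrack = set()   # (remote_ip, remote_port, local_port)

def outbound(remote_ip, remote_port, local_port):
    conntrack.add((remote_ip, remote_port, local_port))
    return "ALLOW"

def inbound(remote_ip, remote_port, local_port):
    if (remote_ip, remote_port, local_port) in conntrack:
        return "ALLOW"   # reply to an established connection
    return "DROP"        # unsolicited packet mimicking a connection

outbound("203.0.113.9", 443, 50123)         # browser opens an HTTPS session
print(inbound("203.0.113.9", 443, 50123))   # ALLOW - the legitimate reply
print(inbound("203.0.113.9", 443, 50999))   # DROP - spoofed 'new' connection
```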

"Modem 'backbone'"

(Adding to previous posts about infrastructure design and firewalls)
A lot of companies use the supplied ISP modem as their gateway, firewall, router and switch.
Keep in mind in such a scenario, if the modem needs to be replaced, back-office operations will have no connectivity.
No switching means no NETBIOS to fall back on, no file / print server access.
You are effectively DOWN.
IT management overhead is increased due to office/companies being dependent on DHCP.

For this reason, use the ISP supplied modem as just a gateway.
1. Put the modem in "bridge" mode to prevent "double NAT" with the addition of a standalone firewall/router.
2. Install a gigabit switch running Layer 2 as the backbone of the network.
3. Install a standalone firewall/router, e.g. Cisco, pfSense, OPNsense, IPFire.
4. Configure "fixed" IP addresses on workstations.
5. Move wireless mobile devices that DO NOT require access to company resources (e.g. file servers, printers) to a "guest" network.

With a Layer 2 backbone in place, it is up to technicians how creative they want to be. Otherwise, use the modem's wireless radio for WiFi access, as it sits in front of the standalone firewall, completely separated from the office network.

"MDNS, SSDP & plug 'n play"

(Adding to previous posts about "plug 'n play" and firewall filtering)
Running a packet analyzer such as Wireshark/tshark or tcpdump gathers a lot of useful information.

7195 10409.301454 40:a8:f0:58:8b:29 -> ff:ff:ff:ff:ff:ff ARP 60 Who has Tell
7187 10396.509158 -> BROWSER 243 Host Announcement xxx-HP, Workstation, Server, NT Workstation, Potential Browser
6916 9938.667511 -> BROWSER 216 Get Backup List Request
7142 10313.068329 -> BROWSER 243 Local Master Announcement xxx, Workstation, Server, Domain Controller, Time Source, Print Queue Server, NT Workstation, Master Browser, DFS server
7.891809 -> SSDP 216 M-SEARCH * HTTP/1.1

In this post, we will concentrate on MDNS and SSDP, as these pose security issues on systems / networks that are not properly configured.
MDNS, short for Multicast DNS, if not filtered, will give out too much information (e.g. hostname, IP, port, application version).
Executives on business trips connect to wireless hotspots to keep up with work.
The Windows 7 / Symantec Endpoint firewall, if left unconfigured, will allow applications such as Google Chrome to set up inbound firewall rules for MDNS on UDP 5353.
This means that a would-be attacker prowling the hotspot, scanning for TCP 135, 139, 445 / UDP 137, 138, 1900, 5353, will find these open ports and throw their exploits against them.
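The MDNS probe an attacker sends to UDP 5353 is an ordinary DNS query addressed to the multicast group 224.0.0.251. A minimal sketch of the wire format, using the standard service-enumeration name:

```python
import struct

def mdns_ptr_query(name="_services._dns-sd._udp.local"):
    """Build the wire format of an mDNS PTR query, the kind of probe
    used to enumerate advertised services on UDP 5353."""
    # DNS header: id=0 (per mDNS convention), flags=0, QDCOUNT=1
    header = struct.pack(">6H", 0, 0, 1, 0, 0, 0)
    # QNAME: each label length-prefixed, terminated by a zero byte
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
    # QTYPE=PTR (12), QCLASS=IN (1)
    question = qname + struct.pack(">2H", 12, 1)
    return header + question
```

A host with an open, unfiltered 5353 answers this with its service list, hostname and addresses; exactly the information the attacker wants.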

SSDP, short for Simple Service Discovery Protocol, is the discovery mechanism of UPnP and is intended for use on private networks (RFC 1918): 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16.
If not filtered, SSDP can be exploited to respond to attacker commands on UDP 1900.
SSDP can also be abused to generate DDoS attacks through modems/routers due to a fatal firmware flaw in the plug 'n play service.
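The M-SEARCH probe seen in the capture above is plain text over UDP 1900; a sketch of how it is built (the ST and MX values here are typical defaults, not taken from the capture):

```python
def ssdp_msearch(st="upnp:rootdevice", mx=2):
    """Build an SSDP discovery request, normally sent to the multicast
    address 239.255.255.250 on UDP 1900."""
    return (
        "M-SEARCH * HTTP/1.1\r\n"
        "HOST: 239.255.255.250:1900\r\n"
        '"MAN": "ssdp:discover"\r\n'.replace('"MAN"', "MAN") +
        f"MX: {mx}\r\n"
        f"ST: {st}\r\n"
        "\r\n"
    )
```

Any UPnP device that replies to this from the WAN side is a candidate for the reflection and rule-injection abuses described here.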

Disable the plug 'n play service on ISP modems and wifi routers.
Proactively filter unused ports.

"NTP 'monlist'"

In addition to time synchronization, the NTP daemon ('ntpd') can handle control commands.
One of its control commands, 'monlist', was identified as having a high amplification ratio that can be used to amplify DDoS attacks from vulnerable NTP servers.
Root Cause
The basic attack technique consists of an attacker sending a 'get monlist' request to a vulnerable NTP server, with the source address spoofed to be the victim’s address.
The following commands can help users verify if the REQ_MON_GETLIST and REQ_MON_GETLIST_1 responses of NTP are currently enabled:
ntpq -c rv <NTP_SERVER>
ntpdc -c sysinfo <NTP_SERVER>
ntpdc -n -c monlist <NTP_SERVER>
If ntp cannot be upgraded to version 4.2.7, the following workarounds can be applied:
1. add 'disable monitor' to ntp.conf
2. add 'restrict default noquery' to ntp.conf
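For reference, the 'get monlist' probe itself is a tiny 8-byte NTP mode-7 packet sent to UDP 123; the asymmetry between this request and the response, which can list hundreds of recent clients, is what makes the amplification possible. A sketch of the raw packet:

```python
import struct

def ntp_monlist_probe():
    """Build the 8-byte NTP mode-7 MON_GETLIST_1 request used to test
    (and, by attackers, to abuse) the monlist feature."""
    # byte 0: 0x17 = version 2, mode 7 (private/implementation-specific)
    # byte 2: implementation 3 (XNTPD)
    # byte 3: request code 42 (MON_GETLIST_1)
    return struct.pack(">4B4x", 0x17, 0x00, 0x03, 0x2A)
```

A patched or restricted server ignores this packet or answers with a short error; a vulnerable one returns a response many times the size of the request, addressed to whatever source IP the attacker spoofed.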
Threat actors are circumventing DDoS (Distributed Denial-of-Service) mitigation solutions by taking advantage of the Universal Plug and Play (UPnP) protocol to mask the source port of packets sent during a DDoS flood attack. These attacks hide their source IPs using UPnP and then leverage DNS and NTP protocols during the DDoS flood.
UPnP enabled devices host the xml file 'rootDesc.xml'.
All an attacker has to do is search for the above-mentioned file on, for example:
Using a SOAP request, an attacker can push Port-Forwarding rules to the modem / router / gateway.
Because conventional modems / routers / gateways do not verify that the provided "internal" IP is actually an internal address, they abide by any forwarding rule they receive.
Port Forwarding rules can therefore be abused to proxy connections from one external host to another.
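A sketch of the SOAP body such a request carries. The WANIPConnection service and AddPortMapping action are standard UPnP IGD; the control URL and SOAPAction header are device-specific (found in rootDesc.xml), and the addresses and ports in the usage example below are documentation placeholders:

```python
def soap_add_port_mapping(external_port, internal_ip, internal_port,
                          proto="TCP", description="demo"):
    """Build the SOAP body for a UPnP IGD AddPortMapping call.
    Many devices never validate that internal_ip is actually on the
    LAN -- the flaw that enables the proxying described above."""
    return f"""<?xml version="1.0"?>
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"
            s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <s:Body>
    <u:AddPortMapping xmlns:u="urn:schemas-upnp-org:service:WANIPConnection:1">
      <NewRemoteHost></NewRemoteHost>
      <NewExternalPort>{external_port}</NewExternalPort>
      <NewProtocol>{proto}</NewProtocol>
      <NewInternalPort>{internal_port}</NewInternalPort>
      <NewInternalClient>{internal_ip}</NewInternalClient>
      <NewEnabled>1</NewEnabled>
      <NewPortMappingDescription>{description}</NewPortMappingDescription>
      <NewLeaseDuration>0</NewLeaseDuration>
    </u:AddPortMapping>
  </s:Body>
</s:Envelope>"""
```

POSTed to the device's control URL, a body like `soap_add_port_mapping(8080, "203.0.113.5", 80)` asks the gateway to forward external port 8080 to an "internal client" that is in fact another external host, turning the gateway into a proxy.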