05 Jul 2015 @ 8:53 AM 

If you haven’t heard of “crawling”, you’ll need these tips to get on top of search listings


Search Engine Optimisation (SEO) is a long-term strategy. It’s not a ‘set and forget’ tactic. You’ll need to consistently monitor the results you extract from your analytics, analyse them and then tweak your website and copy accordingly so that your eCommerce store flourishes, says Sensis senior manager of SEO Kavit Shah.

Remember, if you’re not constantly updating and staying fresh, your competitors will be. It’s only a matter of time until they begin to outrank you and take your share of the online traffic, says Shah.

To ensure that you’re always front and centre for your customers at that critical point, here are the top five tips for making SEO work for your eCommerce business, according to our SEO expert:

1) What’s your point of difference?

Using the power of your content, communicate a unique and attractive identity online to differentiate yourself from competitors, says Shah.

Describe your products and services comprehensively so that search engines can “crawl” that content and surface it prominently in search results.

2) Mobile optimisation

Ideally, your website should be optimised for display across multiple devices and screens so that users can easily read and navigate through it using any device – whether it be the traditional desktop, laptop, tablet, smartphone or another smart device.

3) Internal Links

On an individual page, search engines need to see content in order to list pages in their keyword-based indices. They also need to have access to a “crawlable” link structure, not only to allow users to click through and navigate your site, but also for Search Engine “robots” (also known as spiders) to follow and crawl your content, says Shah.

Without such links, the Search Engine spiders hit a virtual “dead end” on your website and are forced to leave the site with their crawl unfinished.
This prevents them from seeing the entirety of your website, potentially stopping them from showing your content on their Search Engine Results Pages (SERPs).
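One way to check this is to walk your own link graph the way a spider would. Below is a minimal Python sketch (standard library only; the start URL is a placeholder) that follows ordinary <a href> links from page to page; any page that never appears in its output is one a crawler cannot reach either:

```python
# A minimal sketch of a crawl-style link check, assuming a reachable site URL.
# It follows <a href> links the way a search-engine spider would; pages
# reachable only via JavaScript navigation or forms will never appear.
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, max_pages=50):
    """Return the set of internal pages reachable through plain <a> links."""
    domain = urlparse(start_url).netloc
    seen, queue = set(), [start_url]
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", "replace")
        except OSError:
            continue  # unreachable page: a spider would give up here too
        parser = LinkExtractor()
        parser.feed(html)
        for href in parser.links:
            absolute = urljoin(url, href)
            if urlparse(absolute).netloc == domain:
                queue.append(absolute.split("#")[0])  # drop fragments
    return seen

# Any page missing from crawl("https://www.example.com/") is invisible to a
# crawler even if users can reach it some other way.
```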

4) Measurement tools

Ensure that you have some sort of analytics and tracking on your website so that you are able to see what pages are most visited and what pages are getting the most referrals from Search Engines and why, says Shah.

“This will give you insights into the user’s mindset and their behaviour. These insights will then help you tweak and change your content and layouts to better suit the users.”
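As a toy version of that review, here is a short Python sketch that totals up a page-level CSV export; the file name and column names (“page”, “visits”, “search_referrals”) are placeholders, as every analytics tool labels its exports differently:

```python
# A small sketch of the kind of review Shah describes, assuming a CSV export
# of page-level analytics. Column names here are hypothetical.
import csv
from collections import Counter

def top_pages(csv_path, metric="visits", n=10):
    totals = Counter()
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["page"]] += int(row[metric])
    return totals.most_common(n)

# e.g. top_pages("analytics_export.csv", metric="search_referrals")
# lists the pages search engines send the most visitors to.
```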

5) Forget keyword stuffing

“Do not spam the search engines by stuffing your content and pages with keywords for the sake of ranking. This will not help you rank. Keyword stuffing is shunned by search engines,” says Shah.

Google’s algorithms are now clever enough to recognise when a sentence or paragraph is written solely for the purposes of ranking and will be useless to a human user.

“Search Engines often penalise such sites and pages by removing them from their index of results.”
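No search engine publishes a density threshold, but a quick self-audit can still flag copy that leans too hard on one phrase. A rough sketch, with illustrative copy and no official cut-off implied:

```python
# A rough self-check, not an official metric: no search engine publishes a
# density threshold, but a keyword dominating your copy is a warning sign.
import re

def keyword_density(text, keyword):
    words = re.findall(r"[a-z']+", text.lower())
    hits = sum(1 for w in words if w == keyword.lower())
    return hits / len(words) if words else 0.0

copy = "Cheap shoes! Buy cheap shoes online. Cheap shoes shipped fast."
print(f"{keyword_density(copy, 'shoes'):.0%}")  # 30%: written for rankers, not readers
```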

Henry Sapiecha


Viewability: mobile is better than desktop, direct is better than automated

By Sarah Homewood

Ads served programmatically through trading desks have lower viewability than ads served directly through publishers, according to a new study.

The report also found that online ads with greater interactivity are more likely to be ‘viewed’, and that mobile ads are more viewable than their counterparts on desktops.

The report, by independent ad server Sizmek, covers January to December 2014 and analysed viewability data from more than 240 billion measured impressions, spanning more than 840,000 ads and 120,000 campaigns served in 74 countries across more than 22,000 publishers and 43 programmatic partners.

Viewability is one of the biggest challenges facing publishers, with the Interactive Advertising Bureau (IAB) trialling measures in both the US and Australia to get a better understanding of what counts as an ad being ‘viewed’.

Under the measures the IAB came up with, an online display ad is counted as ‘viewed’ if it has been on screen for at least one second with at least 50% of the ad in view. A video ad is counted as ‘viewed’ if it played for two seconds.

Alex White, VP of product strategy at Sizmek, said: “The specifics and definitions will no doubt continue to be debated, but the recent efforts at standardizing viewability terminology move the industry toward a more transparent marketplace for digital ads, and our research backs that up.”

“Clearly, measuring whether an ad is viewable gives the industry a starting point for trading in true engagement.”
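The standard is simple enough to express directly. A minimal sketch, assuming impression logs that record time on screen and the share of the ad in view (real measurement vendors derive these from browser events):

```python
# A minimal sketch of the IAB-style rule described above; the log format
# (seconds on screen, share in view, video flag) is an assumption.
def is_viewable(seconds_on_screen, share_in_view, is_video=False):
    required_time = 2.0 if is_video else 1.0
    return seconds_on_screen >= required_time and share_in_view >= 0.5

impressions = [(0.4, 0.9, False), (1.2, 0.6, False), (3.0, 0.8, True)]
viewable = sum(is_viewable(*imp) for imp in impressions)
print(f"{viewable}/{len(impressions)} impressions viewable")  # 2/3
```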

In December last year Google found that more than half (56.1%) of all digital impressions of online ads are not seen.

The search giant found that a small number of publishers were serving most of the non-viewable impressions, skewing the overall number higher. This means that overall 56.1% of all impressions are not seen, while the average publisher’s viewability sits at 50.2%.
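The gap between the two figures is simply the difference between an impression-weighted rate and a per-publisher average. A toy illustration, with invented numbers, of how a few big, poorly performing publishers drag the weighted figure down:

```python
# A toy illustration (the publisher numbers are invented) of how a few large
# publishers can pull the impression-weighted figure well below the
# per-publisher average, as in Google's findings.
publishers = [  # (impressions served, viewability rate)
    (1_000_000, 0.30),   # one huge publisher with poor placements
    (50_000, 0.60),
    (50_000, 0.65),
    (50_000, 0.70),
]
total = sum(n for n, _ in publishers)
weighted = sum(n * rate for n, rate in publishers) / total
per_publisher = sum(rate for _, rate in publishers) / len(publishers)
print(f"impression-weighted: {weighted:.1%}, average publisher: {per_publisher:.1%}")
# impression-weighted: 34.6%, average publisher: 56.2%
```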

CEO of IAB Australia Alice Manners told AdNews at the time that IAB Australia supports the move towards viewability and is working with agencies and publishers to educate the market on the measure.

“Everyone agrees that the move towards viewable impressions is the right thing for the industry,” Manners said. “There are still technical challenges around it and reasonable expectations should be set in monitoring impressions.”

The IMAAG group, a merger of the IAB’s Agency Advisory Board and the MFA’s Interactive Committee, is developing a code of conduct focused on fraudulent traffic, viewability and verification.


Henry Sapiecha


 16 Aug 2014 @ 8:05 AM 


This week a technical hitch caused websites to wobble worldwide. Tom Chivers discovers the net is held together with chewing gum and string

On Tuesday, at 8.48am British Summer Time, Verizon, a major US internet service provider (ISP), did something relatively mundane and technical: it took some big groups of IP addresses – which we can think of as the phone numbers of the internet, one of which is assigned to every desktop computer, tablet or smartphone – and divided them up into smaller blocks, so that it had more addresses to play with. And in doing so, through no fault of its own, it broke the internet (a bit).

Major websites around the world slowed down, locked up or refused to allow visitors to log in with their usernames and passwords. The most high-profile casualty was eBay: British users were unable to log in for much of the day, causing traders to demand compensation for lost sales. The question is, how did a boring little reallocation of some addresses by an American telecoms company knock over large bits of the internet all around the world?

The answer is complicated, according to Dr Joss Wright, a computer scientist at Oxford University. “There are very few experts in this. It really is the deep magic,” he says. But fundamentally the difficulty lies in the fact that no one planned and built the internet: it grew organically, like a weed. When problems arose, engineers found ways to patch them or work around them. But sometimes those fixes became problems themselves a few years later. And that’s what happened on Tuesday.

The origin of the Verizon meltdown begins with something called the border gateway protocol (BGP). In every major internet hub – ISPs such as Verizon and BT, but also big businesses and universities – there are routers, large versions of those little black boxes with blinking lights that you probably have somewhere in your house powering your Wi-Fi. The job of those routers is to find a path for data from one bit of the internet to another: from your computer on your desk in Huddersfield, for example, to the hotel in Sydney where you’re trying to reserve a room (or rather, from the big BT hub a mile or so from your desk in Huddersfield to the big Telstra hub near the hotel in Sydney). But the internet is a thicket of countless millions of possible routes, so to find their way across, the routers keep a record of the most reliable ones.

That record is, in essence, a big list – a list with 512,000 entries, on older Cisco machines – with each one storing a route to a group of IP addresses. This is the BGP routing table, and it is how one bit of the internet finds another bit.
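The lookup itself is a longest-prefix match: of all the entries that contain a destination address, the most specific one wins. Here is a toy model in Python using the standard ipaddress module, with illustrative prefixes and next-hop names:

```python
# A toy model of a BGP-style lookup: the router picks the most specific
# (longest) prefix in its table. Real routers do this in specialised
# hardware, but the logic is the same. Entries are illustrative.
import ipaddress

routing_table = {  # prefix -> next hop
    ipaddress.ip_network("203.0.113.0/24"): "peer-A",
    ipaddress.ip_network("203.0.0.0/8"): "peer-B",
    ipaddress.ip_network("0.0.0.0/0"): "upstream",  # default route
}

def next_hop(address):
    dest = ipaddress.ip_address(address)
    matches = [net for net in routing_table if dest in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return routing_table[best]

print(next_hop("203.0.113.9"))   # peer-A: the /24 beats the /8
print(next_hop("203.10.0.1"))    # peer-B
print(next_hop("198.51.100.1"))  # upstream (default route)
```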

The trouble is, though, that the internet is built on old software and decisions, and ancient, fudged repairs. “That is the fundamental problem of the internet,” says Wright. “There are 2 to the power 32 IP addresses – about 4.2 billion. That was the number chosen, arbitrarily, in the naive days of the internet, when no one knew it was going to be the global system it is now. Now there are about 4 billion computing devices trying to use the internet, and we’re running out of addresses.”

In the early days of the internet, when regulators were handing out this seemingly inexhaustible supply of IP addresses, they did so generously. Stanford wants 16 million IP addresses? Sure, let them have them. We have lots. But Stanford, or whoever, doesn’t need 16 million IP addresses – it only uses a few tens of thousands – so now that we are running short, the regulators have to claw back groups of them and divide them up, like bulbs in the garden, in order to create additional groups of IP addresses that can be reallocated where they are needed. This is what Verizon did with their collection of in-house IP addresses: they subdivided their groups into smaller groups, temporarily making about 15,000 new groups of addresses. But every time more groups are created, new routes are needed in the BGP table, so that the routers can find them.
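Python’s standard ipaddress module can show the effect in miniature: splitting one block turns one route into hundreds, and every one of them needs a slot in the table. (The /16 below is an illustrative block, not one of Verizon’s.)

```python
# What the reallocation looks like in miniature: splitting one block into
# smaller blocks, each of which needs its own entry in everyone's BGP table.
import ipaddress

block = ipaddress.ip_network("198.18.0.0/16")   # one route
subnets = list(block.subnets(new_prefix=24))    # split into /24s
print(len(subnets))             # 256 routes where there used to be 1
print(subnets[0], subnets[-1])  # 198.18.0.0/24 198.18.255.0/24
```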

Unfortunately, as well as the 4.2 billion IP address limit, the internet is bound by another arbitrary restriction: the 512,000 slots in the BGP table. That seemed like a huge amount when it was put in place, but as of the morning of August 12, most ISPs’ routers had about 500,000 of their places already full. So when the 15,000 new Verizon routes popped in, the number of routes that the routers had to remember suddenly jumped to 515,000. The older machines just couldn’t handle this, so they broke: they shut down, failed to remember new routes, or forgot old routes. And that’s why the internet broke on Tuesday.

“It’s not the first day of the Apocalypse,” Wright says, soothingly. Both the shortage of IP addresses and the shortage of slots in the BGP grid can be fixed. The fixes sound simple, if technical: there is a new IP address system, version 6, which ISPs can upgrade to, which would create lots more addresses; and it is possible to reallocate some of the memory in your router to increase the number of BGP routes by another quarter of a million or so, which would stave off problems like the Verizon one – at least for the foreseeable future.

The reason that this has not happened already is that it is a risky process. “To fix it, they need to reboot the routers, and lots of them will be old machines that have never been rebooted before, and sometimes when you reboot something like that it doesn’t switch back on again,” says Wright. “Something could just go ‘Bing’.” On a similar note, changing from a broadly accepted system to a new, less widely supported one could lead to all sorts of failures. “It involves getting everyone to agree on the protocol. It’s like getting everyone to stop speaking English and speak Esperanto instead. It could be done, in theory, but it’s going to be tricky.”

We view the digital world as a place of constant innovation. But because of the risks of upgrading the infrastructure on which it relies, engineers are under pressure not to experiment, and not to fix things before they become a problem. Now that a definite problem has arisen, and caused a fairly major outage for some fairly major internet players, ISPs might overcome their innate (and entirely sensible) conservatism and make the switch.

Of course, this will only push the problem down the road a few more years, because the internet – as noted – is a patchwork quilt of fixes and workarounds and temporary solutions. “The internet – you have no idea. It’s held together with chewing gum and string,” sighs Wright. “If everyone said, we don’t need the internet for a year, let’s shut it down, we could make it so much better. But we can’t do that.”

The Telegraph, London

Henry Sapiecha

 02 Jul 2014 @ 12:13 PM 


They say “crime pays” — but we can be certain the paychecks for cybercrime come right out of the pockets of every business with a digital footprint.

In March, Juniper Networks and RAND Corporation released Hackonomics: A First-of-Its-Kind Economic Analysis of the Cyber Black Markets; its conclusion that the “Cyber Black Market” is more profitable than the global illegal drug trade led us to examine the cost of the cyber black market on businesses.

Actual costs of cybercrime are much debated, and the dozens of threat reports issued in 2014 differ on the details. This is likely because companies have a hard time knowing what was stolen, among other complex issues that keep surveys, reports and studies from being accurate.

It may also have a bit to do with the fact that some of the companies issuing reports — namely, ones that sell cybercrime prevention and detection software — are stakeholders in cybercrime’s reputation as a growth industry.

One well-known example of fudging was the 2009 report by the Center for Strategic and International Studies, which estimated hacking costs to the global economy at $1 trillion. President Barack Obama, various intelligence officials and members of Congress have cited this number when pressing for legislation on cybercrime protection.


IBT said in 2013:

Turns out that number was a massive exaggeration by McAfee, a software security branch of Intel that works closely with the U.S. government at the local, state and federal level.

A new study by CSIS found numerous flaws in the methodology of the 2009 study and stated that a specific number would be much more difficult to calculate.

The 2014 CSIS report, again done in partnership with McAfee, produced numbers that varied so widely it raised an estimated one trillion eyebrows when it hit the press, though its $100 billion to $400 billion range was a fraction of the 2009 FUD sideshow.

How much does getting hacked actually cost a business?

Wading through the reports will introduce you to a frustrating range of guesstimates on “the cost of hacking” — and different ideas of what that means, exactly.

Researcher Kelly White condensed 23 — some, but not all — of 2014’s threat reports into one entertaining, graphic-heavy document entitled “Paper: The Best of The 2014 InfoSec Threat Reports.”

However, the tightest recent report concentrating on costs, conducted independently, was the Ponemon Institute’s “2013 Cost of Data Breach: Global Analysis.”

The global benchmark report was independently conducted for Symantec and sponsored by IBM; it included nine countries in its goal to nail down the cost of the average consolidated data breach.

The report found that U.S. organizations experienced the highest notification costs, the highest ex-post response costs and the highest lost-business costs associated with data breaches.

Cost estimates and their differences can be attributed to a number of factors; the benchmark report identified four primary cost centers for businesses hit by a data breach: Detection or discovery, escalation, notification and ex-post response.

There are also the types of attacks and threats companies face in differing sectors; some sectors hold higher-value data than others. Breached companies will also face differing fines under data protection regulations and laws.



There are incident response costs, and costs associated with detection and escalation of data breach incidents, such as forensic and investigative work, assessments and audits, crisis team management, plus communications and reports to executive management and board of directors.

Then there are the notification costs — alerting victims that their personal data has been compromised. This includes IT work associated with the creation of contact databases, determination of all regulatory requirements, engagement of services for consumer protection (such as identity theft services and credit report monitoring for individuals), postal expenditures, and the setting up of secondary contacts to mail or email bounce-backs and inbound communication.

Don’t forget the lawyers. Or the redress costs, like replacing credit cards. Or the cost of lost business, which can include customer turnover, “increased customer acquisition activities, reputation losses and diminished goodwill.”

Accordingly, our Institute’s research shows that the negative publicity associated with a data breach incident causes reputation effects that may result in abnormal turnover or churn rates as well as a diminished rate for new customer acquisitions.

According to Symantec’s 2014 report, 2011 saw 232 million identities exposed in data breach incidents — this number more than doubled in 2013, with more than 552 million identities breached. Eight of the breaches in 2013 exposed more than 10 million identities each.

In “Cost of Data Breach” the average cost of a breach increased from $130 to $136 per record, with the report adding: “However, German and U.S. organizations on average experienced much higher costs at $199 and $188, respectively.”


The report examined 277 companies in 16 industry sectors “after those companies experienced the loss or theft of protected personal data.”

It is important to note the costs presented in this research are not hypothetical but are from actual data loss incidents.

We do not include organizations that had data breaches in excess of 100,000 records because they are not representative of most data breaches and to include them in the study would skew the results.

(…) The average cost of a data breach in our research does not apply to catastrophic or mega data breaches because these are not typical of the breaches most organizations experience.

The 2013 report notes that malicious or criminal attacks are the most costly data breach incidents, and “German companies were most likely to experience a malicious or criminal attack, followed by Australia and Japan.”

Ponemon found that seven key factors impacted the cost of a company’s data breach.

Ways to bleed out, a little less

The costs may sound alarming, and they are — but in an environment where everyone’s a target, the data shows that taking steps to reduce harm from potential breaches will save you in both costs and reputation damage.

Simply having an incident response plan in place, the report said, could reduce the cost by as much as $42 per record.

U.S. and U.K. companies showed a reduced cost in their data breaches when a CISO was in place. The study noted, “This factor did not have the same level of impact in India and Brazil.”

Additionally, in the U.S., companies that hired consultants for incident triage, containment and response were able to reduce the cost “an average of $13 per compromised or exposed record.”

According to Ponemon, a strong security posture has the potential to reduce costs in U.S. companies by as much as $34 per record. Security posture, at least in the benchmark study, was attributed to companies that had a Security Effectiveness Score (SES) at or above the average.

If the data breach stemmed from third party errors, this was shown to increase the cost by as much as $43 per record in the U.S.; if the data breach involved lost, stolen or compromised hardware (such as laptops, phones or other devices) the cost was increased by as much as $10 per record.
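Taken together, the report’s U.S. per-record figures make for a crude back-of-envelope model. This is only a sketch: it assumes the adjustments simply add up, which Ponemon doesn’t claim, and the factor names are mine:

```python
# A back-of-envelope sketch using the per-record adjustments reported for
# U.S. companies; real breach costs vary far more than this implies.
US_BASE_COST = 188  # average cost per record, U.S. (2013 report)

ADJUSTMENTS = {  # per-record effect, U.S. figures cited above
    "incident_response_plan": -42,
    "consultants_engaged": -13,
    "strong_security_posture": -34,
    "third_party_error": +43,
    "lost_or_stolen_hardware": +10,
}

def estimated_cost(records, factors):
    per_record = US_BASE_COST + sum(ADJUSTMENTS[f] for f in factors)
    return records * per_record

# 20,000 records, with an IR plan and strong posture, but a third-party error:
factors = ["incident_response_plan", "strong_security_posture", "third_party_error"]
print(f"${estimated_cost(20_000, factors):,}")  # $3,100,000
```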


Seasoned hackers will read this analysis and think that what’s here is obvious. But to slower-moving institutions and, regretfully, negligent gold-diggers like Yo App, a data breach feels like a nuclear blast; the essential advice to be gleaned from reports like Ponemon’s is out of reach.

Henry Sapiecha




No one wants to think about the idea of their company’s customer data, infrastructure, IP or network security as the full-time target for hired-gun hackers, government spies or crime syndicates around the world.

Unfortunately, it’s true. Your most vulnerable point of attack is often the people you trust the most: your employees.

By the standards of today’s black market for thieves, your employees are in the crosshairs for some of the most serious attacks on your company. A new report from RAND Corporation, “Markets for Cybercrime Tools and Stolen Data” (commissioned by Juniper Networks), explains that in addition to unpatched vulnerabilities, the human element will increasingly be the weak point for attacks.

Updates, you can do. Vulnerabilities can be patched. But people… are people.

The majority of successful security defeats are phishing attacks, where the victim clicks a link or downloads an app or attachment that infects… anything it wants to. And a phishing attack can do a lot of damage.

One email spiked with innocuous-looking malware to a vendor cost Target an estimated 40 million credit cards and 70 million user accounts, which were hijacked and sold on the black market within days. Target’s December disaster came from a phishing attack sent to employees at an HVAC firm it did business with.

What’s worse, employee-targeted attacks, when successful, often go undetected until it’s too late. According to Inside the Hacker’s Playbook:

76 percent of breached organizations needed someone else to tell them they’ve been hacked. Employee awareness could be worth more than the latest anti-malware software, and will save you millions in the race to prevent cyber theft. (Trustwave, 2013)

Each of the following shows a way hackers can access critical information through a company’s employees:



  • The Front Page News Attack

    Right now, phishing is among the primary ways unwitting employees are used to attack your company. Phishing attacks are currently sophisticated in a few very specific ways, and RAND’s report tells us that phishing is only going to get more sophisticated as the black market for cybercrime matures.

    Today’s typical phishing attack is an email disguised to look familiar, fooling the employee into clicking a link or downloading an attachment. And cyber criminals follow exactly what the rest of us follow: popular trends, and most especially front-page news.

    RAND explains the black market trend in news-item phishing, which often plays on emotional events. “Different pieces of the market react differently to outside events (e.g., natural disasters, revelations to Wikileaks, or releases of new operating systems).

    Front-page news items are often used in spear-phishing campaigns (e.g., “click this link to donate to victims of Haiti earthquake”) raising the number of potential victims.”



  • Bad Android

    Cell carriers are training users to accept text messages with links, and that’s not good. According to RAND’s new report, the use of social networks and mobile devices will continue to be growth areas for black market cybercrime.

    “The development of mobile malware for Android devices (70 percent of all mobile attacks) is likely to continue until Google, device manufacturers, and service providers work together to find a way of delivering updates and patches to users as they come out (only 12 percent of Android devices have been updated to the versions that prevent premium SMS charges being run up on the phones of unsuspecting users).”

    Employees need to be warned that links in texts open in the mobile browser, which can deliver password-stealing malware just as readily as your computer’s browser. Mobile browsers are subject to the same sorts of bugs, and it’s quite easy for a criminal to spoof a mobile website.



  • Traveling Employees: Easy Targets

    Employees that travel are extremely vulnerable to attacks, and often don’t know they’ve been compromised — because they don’t know how to safeguard their devices, their network access, or what to look for as signs of compromise.

    One such common attack is called the “Evil Maid Attack,” referring to when a criminal accesses the employee’s unattended computer, phone, tablet or hard drives, usually left in a hotel room.

    Devices can be physically compromised in less than sixty seconds, loaded with malware that leaves no trace, can report back “home” and can spread more malware to your company upon return to the home network.



  • Compromised Companies We Trust

    In the current trend of sophisticated attacks, your employee hasn’t clicked on a “weird looking” link at all: they clicked on a link that belonged to a large business whose server was hacked.

    RAND’s report tells us about the “recent increases in the use of watering-hole attacks (where users visit popular, legitimate, but compromised websites) based on well-known exploit kits available for sale on the black market.”

    Last week, an EA Games server was revealed to be compromised and running a phishing operation in which unwitting visitors signed in with their login credentials as usual, not suspecting they were handing hackers access to their accounts. A similar watering hole attack was also in progress at a site with an EA Games subdomain that was taking users’ Apple ID credentials.


  • The Dangers of Working Remotely

    Your employees are targets outside of your network, too.

    Employees might use compromised wireless networks to access corporate assets, log in on someone else’s device or computer in an emergency, or put USB sticks from compromised sources in their laptops.

    Logging on to work email on someone else’s device or computer can allow a hacker to sniff login credentials and passwords.

    If an employee works remotely, hackers can easily “sniff” their internet traffic over unprotected Internet access (Wi-Fi or wired) if the employee doesn’t use a secure VPN to protect their Internet activities.

Henry Sapiecha

Top 5 Reasons Why Backup is Not Disaster Recovery

By Zerto, on 28 January, 2013

Today’s post was written by Jennifer Gill, Zerto’s Director of Product Marketing.

Many organizations have a backup strategy but not a disaster recovery strategy. Why? Because they think that if they have backup, they have a disaster recovery plan. Not quite. Here are 5 reasons why backup is not disaster recovery.

1. Service levels – low recovery point objectives and recovery time objectives.

Backup products do not deliver recovery point objectives of seconds and recovery time objectives of minutes. Backups typically happen once per day and at night, so your recovery point objective could be 23 hours. If you are protecting a mission-critical application, 23 hours of data loss is not acceptable. Rebuilding a virtual machine, and everything that goes along with it, from tape can take days. If you are rebuilding from disk, it might be a little faster – a few hours. Again, this is not a service level that a mission-critical application can tolerate.
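The arithmetic behind those service levels is simple: worst-case data loss is the time since the last completed copy. A small sketch contrasting a nightly backup with near-continuous replication (the timestamps are illustrative):

```python
# A simple sketch of recovery point arithmetic: worst-case data loss is the
# time since the last completed copy. Timestamps are illustrative.
from datetime import datetime, timedelta

def worst_case_data_loss(last_copy, failure_time):
    return failure_time - last_copy

nightly = datetime(2013, 1, 28, 1, 0)   # backup finished at 1:00 AM
failure = datetime(2013, 1, 29, 0, 0)   # server dies just before the next run
print(worst_case_data_loss(nightly, failure))     # 23:00:00, a day of lost work

replicated = failure - timedelta(seconds=10)      # continuous replication lag
print(worst_case_data_loss(replicated, failure))  # 0:00:10
```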

2. Application impact: Performance and backup window. 

There is a reason why backups occur at night – making a copy of an application and its data drains the CPU on the server. If you need more aggressive RPOs than 23 hours as stated above, that means you have to create copies more frequently. This is possible, but at the expense of CPU. This significantly impacts end-user productivity. Additionally, the backup window is a fixed period of time. As stated, this occurs overnight so most organizations assign 8 hours for a backup to complete. The application must be quiesced and then copied. As the applications grow and grow, quiescing the application and backing it up cannot be completed in the backup window.

3. Retention. 

Backups are typically stored for a very long time for compliance and audit purposes. Disaster recovery information is stored for hours or days. Additionally, for a backup, you will have just one snapshot of the application and data. For an enterprise-class disaster recovery solution, you will have several points in time to failover to, just in case the last point in time is corrupted.

4. Automated recovery. 

Building the environment from a backup, especially a tape backup, is extremely time-consuming. This is why the recovery time objectives are so long. With an enterprise-class disaster recovery solution, the entire recovery process can be automated. The VMs on the protected site will automatically be shut down, and then the replicated VMs on the replication site will be started. Any re-IPing will happen to ensure end-users have fast access to the application and data. For mission-critical applications, this entire process should take just a few minutes. This is a very different service level from a backup solution. Additionally, an automated process is a foolproof process, since every manual step that is introduced is an opportunity for an error. A disaster recovery strategy must eliminate as many opportunities for error as possible – automation accomplishes this and even verifies it through non-disruptive testing. It is critical that testing can be done without impacting the applications and data so that end-user productivity is not affected in any way. Once the testing is complete, customers know that failover, recovery and failback will perform as the business requires.
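As a toy illustration of that fixed, scripted order, here is a small simulation; the orchestrator class is a stand-in of my own, not Zerto’s actual API:

```python
# A toy simulation of the automated sequence described above. The class is a
# hypothetical stand-in, not any vendor's interface; the point is that the
# steps run in a fixed order with a verification at the end, leaving no room
# for a mis-typed manual step.
class DrOrchestrator:
    def __init__(self, vms):
        self.state = {vm: "running@protected" for vm in vms}
    def shut_down_protected(self, vm):
        self.state[vm] = "stopped"
    def start_replica(self, vm, new_ip):
        self.state[vm] = f"running@replica({new_ip})"
    def verify(self):
        return all(s.startswith("running@replica") for s in self.state.values())

def failover(orch, ip_map):
    for vm in ip_map:                  # scripted order, no manual steps
        orch.shut_down_protected(vm)
    for vm, ip in ip_map.items():
        orch.start_replica(vm, ip)     # includes re-IPing for user access
    assert orch.verify(), "failover did not complete"

orch = DrOrchestrator(["app-vm", "db-vm"])
failover(orch, {"app-vm": "10.1.0.5", "db-vm": "10.1.0.6"})
print(orch.state)
```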

5. Reverse replication. 

Once an application is available on the replication site, end-users are using it, which is great. However, you must make sure that this application continues to be protected. A backup solution will not start taking backups and ship them back to the production site. A disaster recovery solution will ensure the replicated application is protected by replicating back to the source site.

Henry Sapiecha

Dutchman in ‘biggest cyber attack ever’ had mobile van, hack bunker

A Dutch citizen arrested in northeast Spain on suspicion of launching what is described as the biggest cyber attack in internet history operated from a bunker and had a van capable of hacking into networks anywhere in the country, officials said on Sunday.

The suspect travelled in Spain using his van “as a mobile computing office, equipped with various antennas to scan frequencies,” an Interior Ministry statement said.

Agents arrested him on Thursday in the city of Granollers, 35 kilometres north of Barcelona, complying with a European arrest warrant issued by Dutch authorities.

He is accused of attacking the Swiss-British anti-spam watchdog group Spamhaus which produces a blacklist of spammers, including those distributing ads for counterfeit Viagra and bogus weight-loss pills reaching the world’s inboxes.

The statement said officers uncovered the computer hacker’s bunker, “from where he even did interviews with different international media.”

The 35-year-old, whose birthplace was given as the western Dutch city of Alkmaar, was identified only by his initials: S.K.

The statement said the suspect called himself a diplomat belonging to the “Telecommunications and Foreign Affairs Ministry of the Republic of Cyberbunker.”

Spanish police were alerted in March by Dutch authorities of large denial-of-service attacks being launched from Spain that were affecting internet servers in the Netherlands, United Kingdom and the US. These attacks culminated with a major onslaught on Spamhaus.

The Netherlands National Prosecution Office described them as “unprecedentedly serious attacks on the nonprofit organisation Spamhaus.”

The largest assault clocked in at 300 billion bits per second, according to San Francisco-based CloudFlare, which Spamhaus enlisted to help it weather the onslaught.

The attack was later described by critics as a PR stunt for CloudFlare, but Spamhaus confirmed it was the biggest attack ever leveled at its operations.

Denial-of-service attacks overwhelm a server with traffic, jamming it with incoming messages. Security experts measure the attacks in bits of data per second. Recent cyber attacks – such as the ones that caused persistent outages at US banking sites late last year – have tended to peak at 100 billion bits per second, one third the size of that experienced by Spamhaus.

Dutch, German, British and US police forces took part in the investigation leading to the arrest, Spain said.


Henry Sapiecha

 16 Apr 2013 @ 9:55 AM 


Fibre-bliss: Japan is now home to 2 Gbps internet.

This post was originally published on Mashable.


While Australians compare the merits of Labor’s fibre-to-the-home national broadband network with the Coalition’s fibre-to-the-node proposal, Sony has installed the world’s fastest home internet connection in Japan.

So-net Entertainment, a Sony-backed Japanese ISP, has launched a fibre-based internet service that reaches download speeds of 2 gigabits per second (Gbps), making it more than 20 times faster than the offerings of both Labor and the Coalition in Australia.

The Nuro, as the service is called, is available to homes and small businesses in Tokyo and six surrounding prefectures, Computerworld reports.

The upload speed is a little slower than download at 1 Gbps, but it’s still faster than most of us get anywhere else in the world.

By comparison, the ultra-fast Google Fibre broadband internet service offers a “mere” 1 Gbps download speed – which is still some 100 times faster than today’s average home internet connection – in Austin, Texas and Kansas City, Missouri in the US.

Nuro costs 4980 yen ($A50) a month on a two-year contract, plus a 52,500 yen ($A524) installation fee.
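To get a sense of the scale, a quick sketch of transfer times at these speeds; the file size and the “typical ADSL” comparison rate are illustrative:

```python
# A quick sense of scale: download time for a given connection speed.
# The 25 GB file and the "typical ADSL" rate are illustrative numbers.
def download_seconds(file_gigabytes, speed_gbps):
    return file_gigabytes * 8 / speed_gbps  # bytes -> bits, divide by rate

for label, gbps in [("Nuro (2 Gbps)", 2.0),
                    ("Google Fibre (1 Gbps)", 1.0),
                    ("typical ADSL (0.01 Gbps)", 0.01)]:
    print(f"{label}: {download_seconds(25, gbps):,.0f} s for a 25 GB file")
# Nuro: 100 s; Google Fibre: 200 s; typical ADSL: 20,000 s (about 5.5 hours)
```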

Henry Sapiecha



 08 Jul 2009 @ 11:57 AM 



As this is the first posting on this site on the subject of PAY PER PLAY, I shall keep it brief until such time as data and content are accrued for you to view and assess, with the purpose of preparing a budget for your audio advertising campaign.


1. Targeted subject matter and web content for your message

2. Cheap package deals for coverage of your desired fields

3. Only pay for each time an audio-connected page is opened

4. Weekly, monthly, yearly or volume-of-hits plans available

5. Change your message at pre-arranged intervals

6. Short-term contracts available

7. Only cents per message to pay
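As a rough way to budget against a plan like this, here is a small cost sketch; the per-play rate, traffic figures and cap are placeholders, since actual package pricing isn’t listed here:

```python
# A rough budgeting sketch for pay-per-play audio advertising. The per-play
# rate, play counts and cap are placeholders, not published prices.
def campaign_cost(plays, cents_per_play, cap_dollars=None):
    cost = plays * cents_per_play / 100  # pay only when an audio page opens
    if cap_dollars is not None:          # volume/period plans cap the spend
        cost = min(cost, cap_dollars)
    return cost

print(campaign_cost(12_000, 2))        # 12,000 page opens at 2c: $240.0
print(campaign_cost(12_000, 2, 150))   # same traffic on a $150 capped plan
```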




Published by Henry Sapiecha, CEO – PPPCORPORATE


