"If just one user reports a phish, you can get a head start on defending your company against that phishing campaign and every spotted email is one less opportunity for attackers...but phishing your own users isn't your only option.
Try being more creative; some companies have had a lot of success with training that gets the participants to craft their own phishing email, giving them a much richer view of the influence techniques used. Others are experimenting with gamification, making a friendly competition between peers, rather than an 'us vs them' situation with security."
1. There's an option to hide the names of the employees. It replaces every name with a random animal name plus a colour. It's great if you don't want to know which specific employees are falling for attacks.
2. I love the idea of making employees create their own attacks, but it seems a bit hard to do and quite time-consuming for a company.
It's not the actual individuals - it's the culture it creates: "HA! We caught you, you dumbass, here's 2hrs of training". This means people are afraid to report, or to take ownership of looking out for phishing, as it creates no benefit for them; it's just there to make the security team smug.
Having been part of, and designed, these campaigns before (with open source options like https://getgophish.com/), there is no way for users to report the email as phishing, or to reward users who detected it and therefore didn't interact with it. This means in your example: did the other 81% just not open it, ignore it, or actively recognise it as phishing? These are key metrics a company needs in order to know its potential attack surface.
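The distinction matters because "didn't click" lumps together three very different outcomes: never opened, opened and ignored, and actively reported. A minimal sketch of the breakdown a campaign tool would ideally produce (field names and sample data are invented for illustration; this is not GoPhish's actual data model):

```python
from collections import Counter

# Hypothetical per-recipient records: the furthest action each user took.
events = [
    {"user": "a@corp.example", "action": "clicked"},
    {"user": "b@corp.example", "action": "opened"},
    {"user": "c@corp.example", "action": "reported"},
    {"user": "d@corp.example", "action": "no_action"},
    {"user": "e@corp.example", "action": "no_action"},
]

def campaign_breakdown(events):
    """Percentage of recipients in each outcome bucket."""
    counts = Counter(e["action"] for e in events)
    total = len(events)
    return {action: 100.0 * n / total for action, n in counts.items()}

print(campaign_breakdown(events))
```

With a "reported" bucket in the data, the security team can reward detection rather than only punish clicks.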
>How do you balance/deal with "security shaming", which is proven to put you further at risk as an organization?
I've had this happen to me, not for phishing, but for the Kensington lock thing. Probably not that common any more, at least not in the West, but some workplaces have aggressive laptop locking policies. My workplace tried this stunt of confiscating laptops that were not locked, and everyone had to meet some manager-type person. It was completely asinine. This was a typical badge-access-controlled workplace with additional security personnel. The laptop locks were total overkill.
I find it amazing that they offer a brilliant online streaming service, but you have to be abroad to use it (Gamepass - I'm UK based).
I pay something like 20GBP a month during football season and get every game live in HD via Chromecast, iOS, web or Android, plus RedZone. They evidently have the chops and infrastructure to do it, but it's all a question of running the oligopoly with the broadcasting services in the US.
Thanks for noticing! I acquired the domain name a while ago. There must have been some suspicious content on it before. I have reached out to Bluecoat to see if they can reconsider.
I think it has a lot to do with manual transmission, which allows for natural and controlled engine braking, thus not causing the brake lights to come on (and therefore the wave of copy-cat braking).
The worst I've seen has been in Johannesburg in South Africa, where automatic transmission is rare. I used to refer to it as concertina traffic before I knew it was called a traffic wave, and the concertina is a popular instrument in South Africa amongst the ancestors of the people who are usually responsible for these waves.
Our product stores all the logs raw in flat files on the file system; we don't use databases for keeping the logs, which allows you to scale massively (the ingestion limit is that of the correlation engine and disk bandwidth). You then just need an efficient search crawler and use of metadata, so search performance is good too.
The issue is that if you ever need to pull the logs for court and you have messed with them (i.e. normalised them and stuffed them into a DB), then your chain of custody is broken.
Best of both worlds means parsed-out normalisation, so I don't have to remember that Juniper calls the source IP srcIP and Cisco SourceIP, but with the original logs under the covers for grepping if you need them.
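That "both worlds" approach can be sketched in a few lines: map vendor-specific field names onto a canonical schema, while carrying the untouched raw line alongside. The vendor field names (srcIP, SourceIP) follow the comment above; the canonical schema and sample lines are invented for illustration:

```python
# Per-vendor mapping from native field names to a canonical schema.
FIELD_MAP = {
    "juniper": {"srcIP": "source_ip", "dstIP": "dest_ip"},
    "cisco":   {"SourceIP": "source_ip", "DestIP": "dest_ip"},
}

def normalise(vendor, raw_line):
    """Parse key=value pairs, rename vendor fields to canonical names,
    and keep the untouched raw line for grepping / chain of custody."""
    fields = dict(kv.split("=", 1) for kv in raw_line.split())
    mapping = FIELD_MAP[vendor]
    canonical = {mapping.get(k, k): v for k, v in fields.items()}
    canonical["_raw"] = raw_line  # original line, byte-for-byte
    return canonical

# Both vendors end up with the same canonical source_ip key.
print(normalise("juniper", "srcIP=10.0.0.1 dstIP=10.0.0.2"))
print(normalise("cisco",   "SourceIP=10.0.0.1 DestIP=10.0.0.2"))
```

Queries run against the canonical keys; evidence exports ship the `_raw` lines.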
Moved to pfSense a few months ago and I cannot recommend it enough. I have it on a ThinkServer tower which hosts all my VMs on ESX, and out of a second NIC comes my wifi router.
pfSense is such a great piece of software - DNS forwarder and built-in OpenVPN.
I don't understand why pfSense and OPNsense use FreeBSD and not OpenBSD, which comes with a more advanced version of PF.
Is there any reasonable explanation for their choice? I'm using FreeBSD myself, but not as a router. If I had to choose an OS for a router, I'd probably go with OpenWRT or OpenBSD.
Another lover of pfSense here. I started out with m0n0wall, but there were a few items that ultimately drove me to pfSense (the slightly strange way of setting up rules/port forwards, and the need for different IPsec encryption algos for a corporate firewall connection). I have pf humming along on an older Alix 2D3 kit, and have had ZERO problems. I now see that there's a more powerful APU board that will be my upgrade path when this box dies, or when I upgrade my internet beyond ~50Mbps -- whichever comes first.
The statement that the pf in OpenBSD is "better" isn't necessarily true. The pf in FreeBSD and pfSense is a bunch faster, even on a single core.
The IPsec in FreeBSD and pfSense (especially AES-GCM) is also much faster than that found in OpenBSD.
OpenBSD has a problem: it doesn't scale on multi-core CPUs, and the world has gone multi-core. FreeBSD took years to get this right (the DragonFly fork happened along the way due to disagreement over the MT model).
I remember years ago we had a problem with pfSense because the way FreeBSD had implemented CARP wasn't quite correct (WRT failover and groups of interfaces, IIRC). We had been relying on specific documented behaviour in OpenBSD, as we deployed OpenBSD firewalls, and when we switched to pfSense this bit us. There were workarounds, at least.
I feel lucky, like yourself, to be in one of the divisions with massive growth.
I don't think people appreciate the size and breadth of the IT IBM does; my "tiny" security division, if it were an independent company, would be the 3rd-biggest security vendor in the market.
I think you are underestimating the scale of what a Z system can run... You might not have a use for one, but every Fortune 500 company and banking institution in the world does - and they need it to work; five 9s isn't good enough.
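For context on why "five 9s isn't good enough" is a strong statement, the downtime budget shrinks by an order of magnitude with each nine. A back-of-envelope calculation (assuming a 365-day year):

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes(nines):
    """Allowed downtime per year at 'nines' nines of availability
    (e.g. nines=5 means 99.999% uptime)."""
    return MINUTES_PER_YEAR * 10 ** (-nines)

for n in (3, 4, 5):
    print(f"{n} nines -> {downtime_minutes(n):.2f} min/year")
```

Five nines works out to roughly 5.26 minutes of downtime per year; demanding better than that leaves essentially no budget for unplanned outages.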
> IBM Hursley laboratory director Rob Lamb says: “There are 6,900 tweets, 30,000 Facebook likes and 60,000 Google searches per second." The mainframe CICS runs 1.1m transactions per second, which equates to 10bn per day [0]
That equivalence is bogus. If a search was as simple as a single CICS transaction, Google would just run that and be done.
Mainframes are overpriced and inefficient, but they are the only option for an F500 without the in-house talent to build any kind of distributed, fault-tolerant system.
If as you say mainframes are overpriced and inefficient, that means there is a great opportunity for some organization to move with a much cheaper, more efficient option.
But people have been predicting the death of big iron for several decades now, yet they live on.
I don't doubt mainframes might be overpriced, but I also suspect the reason they persist is that no one has yet come up with a cheaper option offering the same performance figures.
Long one of the main reasons for IBM mainframes was the bet-your-business software that wouldn't run anywhere else and that would be too expensive to rewrite to run somewhere else. Also, there is a remark that in major parts of the financial industry, running an IBM mainframe is nearly a necessary condition for compliance.
> would be too expensive to rewrite to run somewhere else.
I don't doubt that is a major factor. Add to that the major risk that whatever new system you move to might actually fail to work, or end up costing more.
> If as you say mainframes are overpriced and inefficient, that means there is a great opportunity for some organization to move with a much cheaper, more efficient option.
But isn't that what Facebook, Google, and Amazon are doing? Using massively distributed commodity x86 hardware to eat away at incumbent businesses that would outsource their IT services to mainframes? Last I read, Google is about to go into auto insurance, and all three companies I listed do payment processing.
If software is eating the world, SV behemoths are eating business verticals.
The electrical power ($600/day vs $32/day), floor space (10,000 sq ft vs 400 sq ft), and cooling costs for those mainframes were less than those of distributed servers handling a comparable load. In addition, those mainframes required 80 percent less administration/labor (>25 people vs <5 people); “Mean Time Between Failure” measured in decades for mainframe vs months for other servers.
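Taking just the electrical power figures quoted above at face value, the gap compounds quickly when annualised (a back-of-envelope sketch using only the numbers in the comment):

```python
# Daily power cost figures as quoted: distributed servers vs mainframe.
distributed_per_day = 600
mainframe_per_day = 32

annual_saving = (distributed_per_day - mainframe_per_day) * 365
print(f"Power saving: ${annual_saving:,}/year")  # roughly $207k/year
```

That is before the floor space, cooling, and the claimed 80 percent reduction in administration labour are counted.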
96 of the world’s top 100 banks, 23 of the 25 top US retailers, and 9 out of 10 of the world’s largest insurance companies run System z
Seventy-one percent of global Fortune 500 companies are System z clients
Nine out of the top 10 global life and health insurance providers process their high-volume transactions on a System z mainframe
Mainframes process roughly 30 billion business transactions per day, including most major credit card transactions and stock trades, money transfers, manufacturing processes, and ERP systems.
The new mainframe delivered in 2010 improved single system image performance by 60 percent, while keeping within the same energy envelope when compared to previous generations. And the newest mainframe which shipped in 2012 has up to 50 percent more total system capacity, as well as availability and security enhancements.
It uses 5.5 GHz hexa-core chips – hardly old technology. It is scalable to 120 cores with 3 terabytes of memory.
A single search on Google could lead to 10x-100x that many actual requests to the backend systems, so the comparison here is not really fair.
Second, since I assume most of the programmers here don't have access to mainframes, it is hard to verify those numbers. Also, since it is a benchmark, it would be useful to reveal exactly what task they are using here; otherwise I would simply throw this claim into my 'pure PR mess, don't take it seriously' bin :)
CICS itself is essentially just a way to put up the forms for a user interface, but a CICS application usually makes heavy use of a database, say a relational database, e.g., DB2, although there may still be some IMS usage hanging on.
One use of CICS was for heads-down medical claims processing, across all four US time zones. The site our team from IBM Research visited wanted high reliability: if the site was down for, say, an hour, then the data entry staff would have to be called back on a Saturday, for at least half a day, at a higher rate per hour. One such outage in a year, and the CIO could lose his bonus. Two, and he could lose his job. The site was very uptight. Getting into the glass house was not easy; it might have been easier to get into the White House Oval Office.
At one time, to make CICS more secure, there was some interest in having processor hardware support for address sub-spaces. Another idea was cross-memory, where a program could call and execute, say, a function in another address space. There were also data spaces, that is, address spaces with just data and no code, but that could be accessed by other address spaces with code.
Net, the IBM mainframes are not really simple things. Cloning one would not be easy, and at IBM's next version of hardware/software, the clone could be unable to run the newer software and suddenly become a boat anchor.
I have practically zero experience with mainframes, but I've always heard about their insane throughput figures. I've never really seen much on their architecture. Any insight in how they work and do so much?
Mainframes have their own CISC-type architecture and typically have massive amounts of cache and a high clock speed (5 GHz and above).
Their instruction sets are also different, in that they're not strictly of the von Neumann variety. Mainframes can do things like memory copies directly in memory, without copying via registers. This kind of attacks the von Neumann bottleneck directly and is good for batch and high-volume transaction processing.
The software for mainframes is also typically fused into the kernel; things like CICS and DB2 are not "user" programs, they're part of the OS, so I/O is handled much better and things don't "block" as much.
How do you balance/deal with "security shaming", which is proven to put you further at risk as an organization?
There is some interesting research from the UK Government in this space - https://www.ncsc.gov.uk/blog-post/trouble-phishing#section_3
The relevant bit:
"If just one user reports a phish, you can get a head start on defending your company against that phishing campaign and every spotted email is one less opportunity for attackers...but phishing your own users isn't your only option.
Try being more creative; some companies have had a lot of success with training that gets the participants to craft their own phishing email, giving them a much richer view of the influence techniques used. Others are experimenting with gamification, making a friendly competition between peers, rather than an 'us vs them' situation with security."