The value of a pentest

Pentesting is a common cybersecurity activity in which an analyst, with the least information possible about a target, tries to find security vulnerabilities in it. The target is defined by a scope, which can be one or more web applications, mobile apps, IP ranges, or any other list of assets.

An analyst or a team of analysts executes a pentest following steps similar to those a real attacker would take: gather information, map the attack surface, identify vulnerabilities, and exploit them.

Pentests are usually tightly time-boxed due to cost. A two-week pentest is a luxury. Analysts therefore have to know their tools well and prioritize the parts of the scope that carry the highest risk.

Some people question whether a pentest provides value. Their argument is that pentesters focus more on finding any vulnerability at all than on improving the security posture of the organization. That’s a valid concern, but it is not an intrinsic problem of pentesting. A good pentest team will try to find meaningful vulnerabilities that, if processed and understood correctly, help identify root problems that are highly relevant to the organization’s security posture.

For example, if a pentest uncovers multiple injection vulnerabilities, we can draw conclusions about the programming languages in use, which safe frameworks are used or missing, the state of secure development training, and whether a web application firewall is absent or underused.
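As a minimal sketch of that root-cause reading (using Python’s built-in sqlite3 module and an in-memory database; the table and payload are illustrative), compare a query built by string concatenation with a parameterized one. A pentest that keeps surfacing the first pattern points at a missing safe framework or a training gap, not at one broken query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "' OR '1'='1"  # classic injection payload

# Vulnerable pattern: user input concatenated into the query string.
# The payload turns the WHERE clause into a tautology and returns every row.
query = "SELECT name, role FROM users WHERE name = '" + user_input + "'"
print(conn.execute(query).fetchall())  # [('alice', 'admin'), ('bob', 'user')]

# Safe pattern: a parameterized query treats the input strictly as data.
print(conn.execute(
    "SELECT name, role FROM users WHERE name = ?", (user_input,)
).fetchall())  # [] -- no user is literally named "' OR '1'='1"
```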

For the pentest to provide value, the findings have to be meaningful. If the pentest reports only, or mostly, low-hanging fruit, it is worthless. It can even be worse than worthless: counterproductive. If we flood engineering and systems teams with meaningless vulnerabilities, they will waste time on issues that, even if they are security improvements, are not worth the effort of implementing. Additionally, if we give low-value information to engineering and systems teams, our reputation as a cybersecurity team will be damaged, and they will listen to us less and less over time.

Note/Edit: My colleague Thibault, an expert pentester, commented on the idea of low-hanging fruit: if there are many, we should not simply ignore them; we should reflect on the possibility that some security controls are not being applied where they ought to be.

It’s important that the cybersecurity team builds a reputation for common sense. If engineering teams know that when the cybersecurity team reports an issue, it is worth investigating, they will be our allies. If we report everything that is technically a security improvement, no matter how marginal, they will see us as a team that provides information to consider, but not valuable information.

However, for the pentest activity to provide value to an organization, the blue team, the team that receives the pentest report, must also be well prepared. If a security team receives a pentest report and dismisses its value without good reason, or focuses too much on small issues instead of seeing the big picture and identifying root causes, the pentest becomes a checklist of things to close rather than a trigger for actions that improve overall security.

The report is key. It should describe what has been tested at a high level, not only the vulnerabilities. It should be possible to know what kinds of tests were performed. It should also state explicitly what went well and what failed. Any limitation to the normal execution of the pentest should be included as well.

It’s important that the pentest does not give a false sense of security. No findings does not mean there are no security issues to fix. It only means that, within the established limitations, this particular pentest team was not able to find any relevant issue.

The pentest team should explain the severity of the findings carefully and not exaggerate it. Any FUD (fear, uncertainty, and doubt) in the report works against the value of the pentest.

The findings should be documented in a way that makes them easy for the reader to reproduce. Paste text into the report instead of screenshots whenever possible; that will help anyone trying to reproduce the issues.
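For illustration, here is a hypothetical sketch, with a placeholder host, endpoint, and credentials, of reproduction steps written as copy-pasteable text rather than a screenshot:

```python
# Hypothetical reproduction steps for a finding, written as runnable text.
# The host, endpoint, and credentials below are placeholders.
import requests

# Step 1: authenticate as a low-privileged test user (credentials redacted).
session = requests.Session()
session.post("https://app.example.test/login",
             data={"user": "test-user", "pass": "<redacted>"})

# Step 2: request another user's invoice by changing the numeric id.
resp = session.get("https://app.example.test/invoices/1042")

# Expected: 403 Forbidden. Observed in this hypothetical finding: 200 OK
# with another user's data.
print(resp.status_code)
print(resp.text[:200])
```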

If a finding is not reproducible, the pentesters should have a very good reason to report it. In general, a non-reproducible finding should be documented as an anomaly, not as a security issue.

The same applies to unexpected errors. An HTTP 500 error can hint that input validation is missing, but if the pentest team cannot prove it is a security issue, it should be reported as informational only.
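A hypothetical probe (placeholder endpoint) makes the distinction concrete: the malformed request and the observed status code can be recorded verbatim, but without proof of exploitability the entry stays informational:

```python
# Hypothetical probe: send malformed input and record the response code.
# The endpoint is a placeholder; a 500 here is a hint, not proof, of a problem.
import requests

resp = requests.get("https://app.example.test/search",
                    params={"q": "\x00" * 64})  # unexpected NUL bytes

if resp.status_code == 500:
    # Without evidence of exploitability, record this as informational only:
    # "GET /search with NUL bytes in 'q' returns HTTP 500 (unhandled error)."
    print("Informational: unhandled server error on malformed input")
```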

The environment where the pentest is executed matters: the more similar to production, the better. The rise of bug bounty programs has demonstrated that it is fine to test in production, so do whatever is needed to let pentesters work on production systems without a relevant risk of damaging them.

Don’t confuse pentesting with red teaming. A pentest is a very specific activity that tries to find vulnerabilities within a scope. A red team exercise is an engagement where a team tries to breach a system or an organization with the objective of testing its security processes. For example, in a red team exercise you want to test the detection process; in a pentest you probably don’t.

If pentests are done at regular intervals, the organization can measure its security improvements and maintain a good view of what still needs to be improved.

As with many activities and tools, the value a pentest provides depends on the teams that take part in it.
