attack. For example, `/^\.com$/`. Redirecting a request to a path with a `.` at the beginning or at the end will not match against the `/` pattern. You can harden the regular expression itself, for example with a stricter anchored pattern such as `/^\.{5}$/`, and securing the regular expression is good practice, but the easiest way to protect against this issue is to make sure you are not using an unescaped `.` in your regular expressions, since an unescaped `.` matches any single character rather than a literal dot.
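The sketch below, using hypothetical pattern names, illustrates why the unescaped metacharacter matters: a filter written with a bare `.` accepts inputs that the escaped `\.` version correctly rejects.

```typescript
// Hypothetical host-suffix check. The names "strict" and "sloppy"
// are illustrative, not taken from any particular codebase.
const strict = /^redirect\.com$/; // "\." matches only a literal dot
const sloppy = /^redirect.com$/;  // "." matches ANY single character

console.log(sloppy.test("redirectXcom")); // true  -- bypass slips through
console.log(strict.test("redirectXcom")); // false -- blocked
console.log(strict.test("redirect.com")); // true  -- intended match
```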

“.” and “..”

The two path components `.` and `..` refer to the current directory and the parent directory, respectively. Inside a regular expression, however, an unescaped `.` is a metacharacter that matches any single character rather than a literal dot, so a pattern meant to match these components literally behaves differently than intended. This mismatch allows a crafted sequence of characters in a URL to match a pattern such as `/^\.{5}$/`. If you are using these characters to match directories or files, make sure the dots are escaped so the engine interprets them literally in your pattern.
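As a minimal sketch, assuming the goal is to flag `..` traversal segments in a URL path, escaping the dots keeps the check literal:

```typescript
// Flag ".." traversal segments. The escaped "\.\." matches two literal
// dots; an unescaped ".." would match any two characters.
const traversal = /(^|\/)\.\.(\/|$)/;

console.log(traversal.test("/files/../etc/passwd")); // true  -- flagged
console.log(traversal.test("/files/report.pdf"));    // false -- clean
console.log(traversal.test("/a/zz/b"));              // false -- clean
// With unescaped dots, /(^|\/)..(\/|$)/ would wrongly flag "/a/zz/b",
// because ".." there matches the two arbitrary characters "zz".
```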

Robots.txt Protection

Robots.txt is a simple text file listing the paths on your website that search engines, such as Google, should not crawl. It is primarily used to guide crawlers toward the content you do want indexed; note that it is advisory only and honored by well-behaved crawlers, not enforced as access control. If you want to keep search engines away from your entire site, place a robots.txt file in the root directory of your website and add the following lines:
User-agent: *
Disallow: /
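If you only need to keep specific areas out of search results rather than the whole site, list each path in its own Disallow line. The directory names below are placeholders:
User-agent: *
Disallow: /admin/
Disallow: /private/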

Timeline

Published on: 05/19/2022 15:15:00 UTC
Last modified on: 07/25/2022 18:20:00 UTC
