Re: Admin user privilege elevation (how to prevent it)
On Sat, May 12, 2012 at 5:11 AM, Josh Cartmell <joshcartme@gmail.com> wrote:
> I work a lot with Mezzanine which is a CMS that uses Django. A
> security issue was recently revealed where an admin user, let's call
> him A (admin users can post rich content), could put a cleverly constructed
> javascript on a page such that if a superuser, let's call her B, then
> visited the page it would elevate A to superuser status (a more
> thorough explanation is here:
> http://groups.google.com/group/mezzanine-users/browse_thread/thread/14fde9d8bc71555b/8208a128dbe314e8?lnk=gst&q=security).
> Apparently any django app which allowed admin users to post arbitrary
> html would be vulnerable.
>
> My first thought was that csrf protection should prevent this but alas
> that is not the case. The only real solution found is to restrict
> admin users from posting any javascript in their content, unless you
> completely trust the admin users.
This isn't a CSRF issue. CSRF stands for Cross Site Request Forgery. A
CSRF attack is characterised by:
* A user U on site S, who has credentials for the site S, and is logged in.
* An attacking site X that is visited by U.
 * Site X submits a form (by POST or GET) directly to site S; because
U is logged in on S, the request is accepted as if it came from U
directly.
CSRF protection ensures that site X can't submit the form on the
behalf of U - the CSRF token isn't visible to the attacker site, so
they can't provide a token that will allow their submission to be
accepted.
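To make the distinction concrete, here's a minimal pure-Python sketch of the token idea behind CSRF protection. This is not Django's actual implementation -- the function names and session dict are invented for illustration -- but it shows why an attacking site, which never sees the token, can't produce an acceptable submission:

```python
import secrets

def render_form(session):
    # The server generates a per-session secret token and embeds it
    # in every form it renders. Site X never sees this value.
    token = session.setdefault("csrf_token", secrets.token_hex(16))
    return ('<form method="post">'
            f'<input type="hidden" name="csrfmiddlewaretoken" value="{token}">'
            '</form>')

def accept_post(session, submitted_token):
    # Reject the request unless the submitted token matches the one
    # stored in the session (constant-time comparison).
    expected = session.get("csrf_token", "")
    return secrets.compare_digest(expected, submitted_token or "")

session = {}
render_form(session)
assert accept_post(session, session["csrf_token"])   # legitimate submission
assert not accept_post(session, "attacker-guess")    # forged submission fails
```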
What you're referring to is an injection attack. An injection attack
occurs whenever user content is accepted and trusted at face value;
the attack occurs when that content is then rendered.
The canonical example of an injection is "Little Bobby Tables":
http://xkcd.com/327/
However, the injected content isn't just SQL; all sorts of content can
be injected for an attack. In this case, you're talking about A
injecting javascript onto a page viewed by B; when B views the page,
the javascript will be executed with B's permissions, allowing A to
modify the site as if they were B.
Django already has many forms of protection against injection attacks.
In this case, the protection comes by way of Django's default template
rendering using escaped mode. If you have a template:
{{ content }}
and context (possibly extracted from the database):
<script>alert('hello')</script>
Django will render this as:
&lt;script&gt;alert(&#x27;hello&#x27;)&lt;/script&gt;
which will be interpreted as text, not as a script tag injected into your page.
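You can see the same substitution using Python's stdlib html.escape, which performs essentially the same replacement Django's autoescaping does (the exact entity spellings can differ between versions, but the effect is identical):

```python
from html import escape

# User-supplied content, as it might come out of the database.
content = "<script>alert('hello')</script>"

# Escaped, the angle brackets and quotes become inert entities,
# so the browser displays text instead of executing a script.
print(escape(content))
# &lt;script&gt;alert(&#x27;hello&#x27;)&lt;/script&gt;
```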
That said, the protection can be turned off. If you modify the template to read:
{{ content|safe }}
or
{% autoescape off %}
{{ content }}
{% endautoescape %}
or you wrap the incoming string with mark_safe() before placing it in
the template context, then the content will be rendered verbatim --
which means that the javascript will be executed.
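Conceptually, the renderer escapes everything except values that carry a "safe" marker. Here's a simplified pure-Python sketch of that mechanism -- SafeString and conditional_escape below are toy stand-ins I've written for illustration, not Django's real classes, although Django's internals work along these lines:

```python
from html import escape

class SafeString(str):
    """Marker type: markup the developer has declared trusted."""

def mark_safe(s):
    # Wrapping a string flags it as "already safe" for the renderer.
    return SafeString(s)

def conditional_escape(value):
    # The renderer escapes everything *except* strings marked safe.
    return value if isinstance(value, SafeString) else escape(str(value))

user_input = "<script>alert('hello')</script>"
print(conditional_escape(user_input))             # escaped: harmless text
print(conditional_escape(mark_safe(user_input)))  # verbatim: script runs
```

The danger is the second call: once a value is marked safe, nothing downstream will escape it, so the marking must only ever be applied to content you genuinely control.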
I'm not intimately familiar with Mezzanine or DjangoCMS, but based on
the nature of those tools (i.e., tools for building end-visible
content), I'm guessing they've marked content as safe specifically so
that end users can easily configure their CMS sites by putting HTML
into a field somewhere on the site. The side effect is that they're
implicitly saying that *all* user-supplied content is safe, which
provides the channel by which an attacker can do his/her thing.
The lesson from this? Even when you think you can trust a user's
content, you can't trust a user's content :-)
> My question is: are there any other solutions to these sorts of
> problems? It seems reasonable to allow an admin user to post
> javascript; what is unreasonable is for that javascript to be able to
> elevate a user's privileges. Could improvements be made to the csrf
> mechanism to prevent this sort of privilege elevation?
As I've indicated, there is a solution, and Django already implements
it. It involves escaping content, and has nothing to do with CSRF.
In the case of Mezzanine, they've fixed the problem by implementing a
'cleansing' process - i.e., still accepting the content as 'safe', but
post-processing it to make sure that it really *is* safe, by stripping
out <script> tags or anything else that might provide an injection
channel.
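For illustration, here is a toy allowlist cleanser in the spirit of libraries like bleach (which, if I recall correctly, is what Mezzanine's fix builds on). This is a teaching sketch only, not a production sanitizer -- the tag and attribute lists are invented for the example:

```python
from html import escape
from html.parser import HTMLParser

ALLOWED_TAGS = {"p", "a", "b", "i", "em", "strong"}
ALLOWED_ATTRS = {"a": {"href"}}

class Cleanser(HTMLParser):
    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.out = []

    def handle_starttag(self, tag, attrs):
        if tag not in ALLOWED_TAGS:
            return  # drop disallowed tags (e.g. <script>) entirely
        # Keep only allowlisted attributes -- this is what strips onclick.
        kept = [(k, v) for k, v in attrs if k in ALLOWED_ATTRS.get(tag, set())]
        attr_str = "".join(f' {k}="{escape(v or "")}"' for k, v in kept)
        self.out.append(f"<{tag}{attr_str}>")

    def handle_endtag(self, tag):
        if tag in ALLOWED_TAGS:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        # Text content (including the body of a dropped <script>) is
        # escaped and passes through as plain, inert text.
        self.out.append(escape(data))

def clean(html):
    c = Cleanser()
    c.feed(html)
    c.close()
    return "".join(c.out)

print(clean('<p onclick="evil()">hi <script>alert(1)</script></p>'))
# <p>hi alert(1)</p>
```

Note that even this sketch demonstrates the arms-race problem below: each new event attribute or injection vector has to be anticipated by the allowlist.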
While I can fully understand why Stephen has taken this approach for
Mezzanine, I'm not convinced it's a good answer in the general case.
CMS solutions are an edge case -- they actually *want* to accept HTML
content from the end user, so that it can be rendered.
The problem with cleansing is that it doesn't fix the problem -- it
just narrows the attack window. OK, so let's say your cleanser removes
<script> tags; that fixes one obvious way to inject. But what about
<a href="…" onclick="alert('hello')">? That's still javascript content
that could be used for an attack; your attacker just needs to socially
engineer the user into clicking the link. So you update your cleanser
to strip onclick attributes -- at which point the attacker finds a
new way to inject, or they find a bug in your cleansing library, or
they find the one input field on your site that you accidentally
forgot to cleanse… you're now engaged in an arms race with your
attackers.
The default Django position of "don't *ever* trust user content" is
ultimately the safest approach, which is why Django implements it.
Django does provide a way to disable that protection, but it really
should be done as a last resort.
That said -- we're always open to suggestions. If anyone has any good
ideas for preventing injection attacks (or any other type of attack,
for that matter), let us know. You can't have enough out-of-the-box
security.
Yours,
Russ Magee %-)
--
You received this message because you are subscribed to the Google Groups "Django users" group.
To post to this group, send email to django-users@googlegroups.com.
To unsubscribe from this group, send email to django-users+unsubscribe@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/django-users?hl=en.