
Bug/feature improvement request: SSO not picking up the credentials cache on most Linux distributions

Spark, by the graces of Sun's Kerberos plugins, will not pick up the credentials cache on most Linux distributions, and possibly on all of them.

It’ll error out, stating it can’t find the principal name.

  • On a default Linux client that PAM-authenticates against an MIT Kerberos library, the created credentials cache isn't the expected /tmp/krb5cc_<uid>, but /tmp/krb5cc_<uid>_{bunch of randoms}. The name of it is then written into the KRB5CCNAME environment variable.

There’s a few possible ways around this:

  • I suggest Spark try to pick up this variable and use its contents to correctly define the credentials cache location.
    It is this I would kindly request be added as an issue in the tracker.

  • A quick hacky workaround would be to symlink $KRB5CCNAME to an expected location in the Spark script before Spark gets started.
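The symlink idea could also live in a small pre-launch step rather than the shell script. Below is a minimal sketch, assuming a hypothetical launcher class runs before Spark initializes Kerberos; the class name, method name, and the /tmp/krb5cc_<uid> target are all assumptions, not Spark code.

```java
import java.io.IOException;
import java.nio.file.*;

// Hypothetical pre-launch helper: point the fixed cache name the
// Kerberos plugin expects at the PAM-created cache file.
public class CcacheSymlink {

    /** Links `expected` to the cache file named by a KRB5CCNAME value. */
    static Path link(String krb5ccname, Path expected) throws IOException {
        // KRB5CCNAME may carry a "FILE:" type prefix; strip it to get the path.
        Path target = Paths.get(krb5ccname.replaceFirst("^FILE:", ""));
        Files.deleteIfExists(expected);          // replace any stale link
        return Files.createSymbolicLink(expected, target);
    }

    public static void main(String[] args) throws IOException {
        String ccname = System.getenv("KRB5CCNAME");
        if (ccname != null && args.length > 0) {
            // /tmp/krb5cc_<uid> is the conventional per-user location;
            // the uid is passed in as args[0] in this sketch.
            link(ccname, Paths.get("/tmp/krb5cc_" + args[0]));
        }
    }
}
```

This keeps the hack out of Spark itself: the link is rebuilt on every launch, so a fresh PAM-generated cache name is picked up each login.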

An issue was created for this: SPARK-887. PAM is what is making the random addition to the cache name, so there are several workarounds:

  • Configure PAM to not do this

  • Use a symlink to the assumed name

  • Use the native GSS libraries on Solaris or Linux (32-bit), which became a viable option in Java 6.
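For the native-GSS route, Java 6 added the real JDK system property sun.security.jgss.native; setting it makes JGSS delegate to the platform GSS library (e.g. libgssapi_krb5.so), which reads KRB5CCNAME itself. Whether and where Spark exposes a hook to set this is an assumption here; the class below is only an illustration of the property.

```java
// Sketch: opting in to the native GSS provider introduced in Java 6.
// The property name is a real JDK property; wiring it into Spark's
// startup sequence is the assumed part.
public class NativeGss {
    public static void main(String[] args) {
        // Route JGSS calls through the OS Kerberos/GSS library, which
        // honours KRB5CCNAME on its own (so no cache-name guessing).
        // A specific library can be named via sun.security.jgss.lib if needed.
        System.setProperty("sun.security.jgss.native", "true");
        System.out.println(System.getProperty("sun.security.jgss.native"));
    }
}
```

The property must be set before the first JGSS call, so it belongs on the java command line (-Dsun.security.jgss.native=true) or very early in startup.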

Jay, thank you. I’ll add this as a comment to the issue as well.

The problem with the cache name is that there are pretty good reasons NOT to use a fixed one, mostly to do with multi-user systems.

Therefore I would recommend against reconfiguring PAM not to do this and building symlinks to the assumed name.

I therefore see little alternative but to use the

String cache = System.getenv("KRB5CCNAME");

variable and parse that into something usable.
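That parsing step can be sketched in a few lines. A KRB5CCNAME value is either a bare path or a typed value like FILE:/tmp/krb5cc_1000_AB12; only FILE caches map to a single file path. The class and method names below are invented for illustration, not part of Spark.

```java
// Minimal sketch of turning a KRB5CCNAME value into a file path that
// could be handed to the Sun Kerberos login module.
public class Krb5CcName {

    /** Returns the cache file path, or null if the value is not a file cache. */
    static String cachePath(String value) {
        if (value == null || value.isEmpty()) return null;
        int colon = value.indexOf(':');
        if (colon < 0) return value;            // bare path, no type prefix
        String type = value.substring(0, colon);
        if (type.equalsIgnoreCase("FILE")) {
            return value.substring(colon + 1);  // e.g. FILE:/tmp/krb5cc_1000_AB12
        }
        return null;                            // MEMORY:, DIR:, KCM: have no single file
    }

    public static void main(String[] args) {
        System.out.println(cachePath(System.getenv("KRB5CCNAME")));
    }
}
```

Returning null for non-FILE types matters because of the Heimdal point below: if the cache isn't a file at all, there is nothing for the file-based plugin to use.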

On the other hand, I’ve heard (haven’t used it myself) that the Heimdal Kerberos implementation doesn’t even use files as cache anymore for security reasons.

That would leave only the option of using the native GSS libraries to completely support all configurations.

I offered the workarounds as just that: workarounds. They are not complete solutions. Most people running Spark will have little need for the extra randomness provided by PAM. The randomness is most useful on systems where multiple separate sessions are needed for the same user. So unless you are using a single large common X server with dumb terminals (which is possible: LTSP, for example), configuring PAM to use the traditional location for logins is not really a problem. Just set ccache="/krb5cc_%u" in your PAM options.
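As a concrete illustration of that last suggestion: option names differ between pam_krb5 implementations, and the fragment below assumes Russ Allbery's pam-krb5 module and the conventional /tmp location (the quoted option above uses a slightly different path). Adjust both to your own setup.

```
# Illustrative /etc/pam.d line; option spelling varies by pam_krb5 build.
# %u expands to the user's UID, giving the traditional fixed cache name.
auth  sufficient  pam_krb5.so  ccache=/tmp/krb5cc_%u
```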

Many implementations are moving to memory caches, since file caches are somewhat easier for root to gain control over. Though it is still possible for root to obtain other users' memory caches, it is much more difficult.

At some point I will get this fixed; I just have no ETA right now, since it's a low priority for me.