
Re: MEMORY credential cache interop between Heimdal and MIT?



At this point we're looking for volunteers, not more wishes, but  
here's a wish:

Instead of always going up the tree visiting all parents, have some  
way to "stop" so you can securely implement PAG semantics.  I don't  
think I'd use it often, but I like the idea of being able to set up  
an "admin" window and a "secure sandbox" window with more/less  
privileges than my default login session.

I would think the AFS folks would be interested in seeing the  
Kerberos ticket cache scope match the scope of PAGs, as well as  
having a PAG implementation that isn't so dependent on OS-specific  
hackery.  I'm not sure this is easier than what they do now, but if  
it gets AFS and Kerberos on the same page, that's a good thing.

On Aug 22, 2007, at 10:21 AM, Michael B Allen wrote:

> [removing some addresses that have been inactive]
>
> On Wed, 22 Aug 2007 08:52:06 -0400
> Ken Raeburn <raeburn@MIT.EDU> wrote:
>
>> On Aug 16, 2007, at 16:51, Michael B Allen wrote:
>>> On Thu, 16 Aug 2007 15:03:24 -0400
>>> Jeffrey Altman <jaltman@secure-endpoints.com> wrote:
>>>
>>>> Michael:
>>>>
>>>> Have you examined the krb5_cc_xxx API that both MIT and Heimdal
>>>> implement?
>>>>
>>>> If krb5_cc_register() was exported, would that satisfy your
>>>> requirement?
>>>>
>>>> It would permit you to add any credential cache implementation of
>>>> your
>>>> choice to the library at run-time.
>>>
>>> Hi Jeffrey,
>>>
>>> That wouldn't work. The krb5_cc_register function would only
>>> register cc ops with the implementation you're linked against [1].
>>> So if your program is linked with Heimdal and it called a cURL
>>> library that was linked with MIT, the krb5_cc_register call would
>>> have no effect on the ccache code used by cURL. And even if you
>>> could call the other implementation's krb5_cc_register using some
>>> crazy dlopen trickery, the internal structures are not the same.
>>
>> The idea I had (which I guess I didn't outline well) was to either
>> use dlopen so you can independently access both implementations, or
>> create multiple shared objects, for example:
>>
>> obj1.so
>>    implements a cache
>>    links against obj2.so and obj3.so
>>    library init function calls register_cache_with_mit,
>> register_cache_with_heimdal
>> obj2.so
>>    links against MIT code
>>    implements register_cache_with_mit
>> obj3.so
>>    links against Heimdal code
>>    implements register_cache_with_heimdal
>>
>> Then set LD_PRELOAD=/path/to/obj1.so, or link the application against
>> it.  Unless the dynamic linker loads multiple copies of the
>> libraries, this ought to get you a shared credential cache between
>> the implementations for that process.  The core implementation can
>> define its own data structures, and the methods for each
>> implementation can do the translation.
>>
>> However you do it, you'd probably wind up wanting to compile multiple
>> object files anyway, to avoid confusion between the MIT and Heimdal
>> type names and such.  Though you could simplify it from the above,
>> merging either obj2.so or obj3.so into obj1.so, for example.  Or
>> linking a non-shared obj1.o directly into the application instead of
>> as a shared library object.  Et cetera....
>>
>> Kind of ugly, but it would get the originally requested functionality
>> with today's released libraries.
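
A minimal sketch of the obj1.so init function in Ken's layout above.
The register_cache_with_mit and register_cache_with_heimdal names are
his hypothetical entry points (in the real scheme they would live in
obj2.so and obj3.so and call into the respective Kerberos library);
stubs stand in here so the example is self-contained:

```c
/* Sketch of obj1.so's library init function.  The register_* entry
 * points are hypothetical; stubs replace obj2.so/obj3.so here. */

static int mit_registered;
static int heimdal_registered;

/* In the real obj2.so this would call into the MIT library
 * (e.g. its krb5_cc_register, if it were exported). */
static void register_cache_with_mit(void)
{
    mit_registered = 1;
}

/* In the real obj3.so this would call into the Heimdal library. */
static void register_cache_with_heimdal(void)
{
    heimdal_registered = 1;
}

/* GCC/Clang constructor: runs when the object is loaded (e.g. via
 * LD_PRELOAD), before the host application's main(). */
__attribute__((constructor))
static void shared_cc_init(void)
{
    register_cache_with_mit();
    register_cache_with_heimdal();
}
```

Built as a shared object and listed in LD_PRELOAD, the constructor
fires as soon as the dynamic linker maps obj1.so, so both caches are
registered before the application runs any Kerberos code.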
>
> Hi Ken,
>
> I think that the ccache plugin idea is a worthwhile project. Yes, I
> think it would solve Alf's original issue. But by itself it would not
> solve the shared storage or access control issues (access control  
> being
> what I am really interested in).
>
> The only way to ensure that the ccache is truly protected is with a
> kernel extension. I would rather invest time in a solid long-term
> solution, and I think a secure shared storage kernel extension
> project would be a step in the right direction.
>
> The extension could be quite simple. The caller could open a device
> file and do an ioctl, roughly like:
>
>   int fd = open("/dev/sss0", flags);
>   ioctl(fd, req, "krb5cc[uid=1234,ppid=5678]");
>   FILE *ccachefp = fdopen(fd, mode);
>
> So the kernel extension could be a simple device file implementation
> (this should handle all of the *nix systems). The ioctl data
> "krb5cc[uid=1234,ppid=5678]" gives the name of the storage and some
> access control parameters. If the storage is being created rather
> than opened, the access control parameters are set. The uid indicates
> that the named ccache is specific to processes with that uid. The
> ppid indicates that only processes with that pid, or a descendant of
> that pid, should have access to the storage (i.e. the check would
> simply walk up the parent pids of the current process until it
> matched the supplied ppid).
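
The parent-walk check Mike describes could look something like the
sketch below. The process table here is a toy stand-in with made-up
pids; a real extension would follow the kernel's own parent pointers
rather than a user-supplied array:

```c
#include <sys/types.h>

/* Toy process table standing in for the kernel's; pid/ppid values
 * are invented for illustration. */
struct proc { pid_t pid; pid_t ppid; };

static const struct proc demo[] = {
    { 1,    0    },   /* init */
    { 5678, 1    },   /* login session that created the ccache */
    { 9000, 5678 },   /* shell under that session */
    { 9001, 9000 },   /* child of the shell */
    { 42,   1    },   /* unrelated process */
};

/* Returns 1 if pid equals ancestor or has it among its parents,
 * walking up the tree until it matches or reaches the top. */
static int has_ancestor(const struct proc *table, int n,
                        pid_t pid, pid_t ancestor)
{
    while (pid != 0) {                /* pid 0 marks the top */
        if (pid == ancestor)
            return 1;
        pid_t parent = 0;
        for (int i = 0; i < n; i++)   /* look up pid's parent */
            if (table[i].pid == pid) {
                parent = table[i].ppid;
                break;
            }
        pid = parent;
    }
    return 0;
}
```

With the table above, a grandchild of 5678 (pid 9001) passes the check
while the unrelated pid 42 does not, which is exactly the access
control the ppid=5678 parameter is meant to express.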
>
> Now if there's some young buck out there looking for an excuse to
> experiment with kernel extensions, here's your chance for glory!
>
> Mike

------------------------------------------------------------------------
The opinions expressed in this message are mine,
not those of Caltech, JPL, NASA, or the US Government.
Henry.B.Hotz@jpl.nasa.gov, or hbhotz@oxy.edu