12/2 What does the following struct mean?
struct { instr_t :18, _i:1, :13; } u_i;
#define i_i u_i._i
#define getI(x) (((((uint32)(x)) << 18)) >> 31)
#define putI(x) ((((uint32)(x)) << 31) >> 18)
\_ it means you should read up on bit fields.
\_ Hi paolo!
\_ Ok, I give up. Why is this funny?
\_ Especially because paolo doesn't log in to soda.
\_ Well, not in the past week perhaps, but:
pst ttyEm 63.73.217.160 Fri Nov 22 12:32 - 12:33 (00:01)
pst ttyEH 63.73.217.160 Wed Nov 20 14:22 - 14:27 (00:05)
pst ttyCi 63.73.217.160 Tue Nov 19 20:04 - 21:35 (01:31)
pst ttyBS 63.73.217.160 Mon Nov 18 10:42 - 10:46 (00:03)
\_ yes yes that's nice. Why is this funny?
\_ The struct is composed of three members, one of
type instr_t that is 18 bits, one of type _i that
is 1 bit, and I think a signed int of 13 bits.
This might help:
http://www.cs.cf.ac.uk/Dave/C/node13.html
\_ they're commas, not semicolons. _i is the field identifier,
not the type. All three fields are of type instr_t, with
the first 18 and the last 13 bits being unnamed.
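   Spelled out one declarator per line, it's equivalent
   to this (same struct; note that whether a typedef
   like instr_t is even allowed as a bit-field type is
   itself up to the compiler):

       struct {
           instr_t : 18;    /* unnamed: 18 bits, just padding */
           instr_t _i : 1;  /* the single named field         */
           instr_t : 13;    /* unnamed: 13 more padding bits  */
       } u_i;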
\_ As far as the macros are concerned, does the first
one return the value of the last 14 bits * 62?
I have no idea what the second one does since it
seems to deal with only the last bit.
\_ I think even the order of the bits in bitfields is up to
the compiler. So getI() only works for some compilers.
Why does he need getI() anyway? "foo = u_i._i;" will do.
--- yuen
\_ I would guess that it's for cases where that 32-bit
value got read as an integer rather than as the struct.
of course, this could have been easily solved with a
union.
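   Something along these lines, say (still every bit as
   layout-dependent, of course; u, raw_instruction, and
   i_bit are made-up names here):

       union {
           uint32 word;                           /* the value read as an integer */
           struct { instr_t :18, _i:1, :13; } f;  /* the bit-field view of it     */
       } u;

       u.word = raw_instruction;  /* store the raw 32-bit value... */
       i_bit  = u.f._i;           /* ...and read the field back    */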
\_ wha--? It's not that hard to understand. you've got a
32-bit value where all you're concerned with is the 19th
bit from the left. getI gets that bit. It shifts left 18
bits then shifts right by 31 bits (hint: there's only one
bit left). putI does the opposite; it takes a value for
that bit and places it in the right spot in the returned
value.
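   A quick trace, with only that bit set (the 19th bit
   from the left is bit 13, counting from 0 at the right):

       x                        = 0x00002000  /* bit 13 set        */
       (uint32)x << 18          = 0x80000000  /* up to the top bit */
       ((uint32)x << 18) >> 31  = 0x00000001  /* down to bit 0     */

   Going the other way, putI(1) is ((uint32)1 << 31) >> 18,
   which is 0x00002000: the bit lands back at position 13.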
\_ But the problem is that you don't know whether _i is
the 19th bit from the left or from the right. You don't
know whether the 18-bits or the 13-bits are the more
significant bits. It's up to the compiler. -- yuen
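\_ You can at least find out what a given compiler
   does with a quick test (a sketch; it assumes
   instr_t works as a bit-field type and that the
   three fields pack into exactly 32 bits, neither
   of which is guaranteed):

       #include <stdio.h>
       #include <string.h>

       typedef unsigned int instr_t;
       typedef unsigned int uint32;

       int main(void) {
           struct { instr_t :18, _i:1, :13; } s;
           uint32 w;
           memset(&s, 0, sizeof s);   /* clear everything          */
           s._i = 1;                  /* set only the named field  */
           memcpy(&w, &s, sizeof w);  /* reread it as an integer   */
           /* which bit comes out set depends on the compiler's
              allocation order: commonly 0x00040000 (fields
              allocated low-to-high) or 0x00002000 (high-to-low) */
           printf("_i = 0x%08x\n", w);
           return 0;
       }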
\_ http://www.cs.cf.ac.uk/Dave/C/node13.html
Read the section on portability; they prefer
shifting over bit fields for exactly that reason.
\_ shifting is preferable to the bit fields, but
I think masks would have been better than
the extra shifts and would be just as portable
(if not more so, since they wouldn't necessarily
depend on operating on 32-bit values).
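E.g., something along these lines (I_BIT is a made-up
name; 13 is the position the original shifts imply):

    #define I_BIT   13
    #define getI(x) ((((uint32)(x)) >> I_BIT) & 0x1)  /* pull bit 13 out    */
    #define putI(x) ((((uint32)(x)) & 0x1) << I_BIT)  /* drop a bit into 13 */

Nothing there cares how wide the value is, as long as
it has a bit 13.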
\_ yes. sorry, I didn't mean to imply that it wasn't
compiler-dependent. there's also the assumption
that there's an appropriate underlying type for
the uint32 typedef. my assumption is that the
code is for some specified compiler and
architecture.