comp.lang.c - 22 new messages in 4 topics - digest
comp.lang.c
http://groups.google.com/group/comp.lang.c?hl=en
Today's topics:
* a gift for the mortensens - 9 messages, 4 authors
http://groups.google.com/group/comp.lang.c/t/1911197d173cd869?hl=en
* Garbage - 2 messages, 2 authors
http://groups.google.com/group/comp.lang.c/t/e540ba3bcd7a281f?hl=en
* Comparison of C Sharp and C performance - 6 messages, 3 authors
http://groups.google.com/group/comp.lang.c/t/4cf78a2afa73b77a?hl=en
* arithmetic on a void * pointer - 5 messages, 3 authors
http://groups.google.com/group/comp.lang.c/t/451b17d19dcc5236?hl=en
==============================================================================
TOPIC: a gift for the mortensens
http://groups.google.com/group/comp.lang.c/t/1911197d173cd869?hl=en
==============================================================================
== 1 of 9 ==
Date: Mon, Jan 11 2010 8:29 pm
From: frank
Keith Thompson wrote:
> frank <frank@example.invalid> writes:
>> Barry Schwarz wrote:
>>> On Sun, 10 Jan 2010 16:05:14 -0700, frank <frank@example.invalid>
>>> wrote:
>>>> dan@dan-desktop:~/source$ gcc -std=c99 -Wall -Wextra mort1.c -o out; ./out
>>>> i is 1337295409
>>>> i is 2147483647
>>> One of these statements must be false.
>> They are not simultaneously, but sequentially true.
>
> As I already pointed out and you acknowledged, the second "i is" was a
> typo for "RAND_MAX is". i never takes on the value 2147483647, except
> perhaps by coincidence. As printed, the first statement is true, the
> second is false.
Now that I think about it, you are absolutely correct. i assumes one
value in the program and hence cannot be two. I think I was injecting
less belief in the "i is " part than a literal one. A longer version of
it may have been better to read "The integer I'm looking for is ".
>
>>>> c is 1
>>> While that is the character representation of the low order byte on an
>>> ASCII machine, my EBCDIC system will produce significantly different
>>> output.
>> I thought they were the same for the first 128 elements, and that
>> ascii filled out 129-256, while ebcdic was size 128.
>
> Nope. ASCII is a 7-bit code with codes 0-127 (typically stored in an
> 8-bit byte). EBCDIC is an 8-bit code, mostly inconsistent with ASCII.
>
> Google is your friend.
I swear I've seen it differently, but I doubt you'd write that if it
weren't demonstrable.
>
>> snip
>>>> c=(char)i;
>>> Does your system complain without the cast? If this code is executed
>>> on a system where char is signed, the cast may not produce the desired
>>> value and may not produce any value.
>> Why would a person want to have a signed char? I've never used one,
>> except errantly.
>
> Historical reasons, mostly. The point is that, on many modern
> implementations, very likely including the one you're using, plain
> char is a signed type.
>
> Try printing the values of CHAR_MIN, CHAR_MAX, SCHAR_MIN, SCHAR_MAX,
> and UCHAR_MAX (defined in <limits.h>).
For something like this, I don't want to write a program; I want to look
at limits.h for my implementation. I've probably asked you three times
for this over the past couple years, but I keep getting sent back to go
in a lot of ways, as I work up my linux install for the 4th time. What
is the name of the newsgroup specific to gcc?
--
frank
== 2 of 9 ==
Date: Mon, Jan 11 2010 9:15 pm
From: frank
Keith Thompson wrote:
> frank <frank@example.invalid> writes:
>> Keith Thompson wrote:
>>> frank <frank@example.invalid> writes:
>> [snipped and reordered for thematic reasons]
>>
>>> None of the three casts in your program are necessary, and IMHO your
>>> code would be improved by dropping them.
>>>
>>> srand(time(NULL);;
>>> ...
>>> c = i;
>> This seems to work (with a right paren added and semi-colon removed):
>
> Oops, typo on my part.
I think "we" should update the FAQ to replace your expression with the
one that Steve Summit had. People like me, who read his collection
while sitting in a parking lot, are always grateful for his
contribution, but useless casts make code unreadable.
>
> [...]
>> #include <stdio.h>
>> #include <stdlib.h>
>> #include <time.h>
>>
>> #define N 26
>>
>> int
>> main (void)
>> {
>> int i;
>> char c;
>>
>> srand(time(NULL));
>> printf ("RAND_MAX is %d\n", RAND_MAX);
>> i = (int) ((double) rand () / ((double) RAND_MAX + 1) * N);
>> printf ("i is %d\n", i);
>>
>> return 0;
>> }
>>
>> // gcc -std=c99 -Wall -Wextra mort2.c -o out; ./out
>> dan@dan-desktop:~/source$
>>
>> So none of those casts were doing anything for me? If so, I say we
>> replace this part of the FAQ.
>
> No, that's not what I said. None of the casts in your previous code
> were necessary. I obviously wasn't commenting on code you hadn't
> posted yet.
Yeah, I didn't do the best editing here.
>
> In your new code:
>
> i = (int) ((double) rand () / ((double) RAND_MAX + 1) * N);
>
> the cast to int is unnecessary, since the result is being assigned to
> an int object. The other two casts are necessary and appropriate,
> since in their absence the int values wouldn't be converted to double.
Still curious about this.
> Indentation?
>>>
> [52 lines deleted]
>> It took less than a minute.
>
> Great. Though I'm not quite sure why you felt the need to tell us, in
> great detail, how you did it. Just posting properly indented code is
> more than enough.
>
http://clc-wiki.net/wiki/clc-wiki:Policies#codeformat
This is topical, according to the wiki. I didn't explain the details
but will now take the opportunity to do so. My OS (Ubuntu) registered
that I wanted indent and gave me the line that I needed to make it
happen off the command line. Then I hit the up arrow twice to find the
command that had failed, and voila.
--
frank
== 3 of 9 ==
Date: Mon, Jan 11 2010 9:59 pm
From: Barry Schwarz
On Mon, 11 Jan 2010 15:14:23 -0800, Keith Thompson <kst-u@mib.org>
wrote:
>frank <frank@example.invalid> writes:
snip
>> #include <stdio.h>
>> #include <stdlib.h>
>> #include <time.h>
>>
>> #define N 26
>>
>> int
>> main (void)
>> {
>> int i;
>> char c;
>>
>> srand(time(NULL));
>> printf ("RAND_MAX is %d\n", RAND_MAX);
>> i = (int) ((double) rand () / ((double) RAND_MAX + 1) * N);
>> printf ("i is %d\n", i);
>>
>> return 0;
>> }
>>
>> // gcc -std=c99 -Wall -Wextra mort2.c -o out; ./out
>> dan@dan-desktop:~/source$
>>
>> So none of those casts were doing anything for me? If so, I say we
>> replace this part of the FAQ.
>
>No, that's not what I said. None of the casts in your previous code
>were necessary. I obviously wasn't commenting on code you hadn't
>posted yet.
>
>In your new code:
>
> i = (int) ((double) rand () / ((double) RAND_MAX + 1) * N);
>
>the cast to int is unnecessary, since the result is being assigned to
>an int object. The other two casts are necessary and appropriate,
>since in their absence the int values wouldn't be converted to double.
Only the second cast to double is necessary. Once RAND_MAX is
converted to double, all the remaining values in the expression must
also be converted. While the first cast to double might achieve the
same effect, the denominator would be evaluated as an int first and
could overflow before being converted to double.
--
Remove del for email
== 4 of 9 ==
Date: Mon, Jan 11 2010 9:59 pm
From: Barry Schwarz
On Mon, 11 Jan 2010 16:26:52 -0700, frank <frank@example.invalid>
wrote:
>Barry Schwarz wrote:
>> On Sun, 10 Jan 2010 16:05:14 -0700, frank <frank@example.invalid>
>> wrote:
>
>>> dan@dan-desktop:~/source$ gcc -std=c99 -Wall -Wextra mort1.c -o out; ./out
>>> i is 1337295409
>>> i is 2147483647
>>
>> One of these statements must be false.
>
>They are not simultaneously, but sequentially true.
>>
>>> c is 1
>>
>> While that is the character representation of the low order byte on an
>> ASCII machine, my EBCDIC system will produce significantly different
>> output.
>
>I thought they were the same for the first 128 elements, and that ascii
>filled out 129-256, while ebcdic was size 128.
The character '1' on an ASCII system is 0x31. On an EBCDIC system it
is 0xF1. ASCII 'A' is 0x41, EBCDIC is 0xC1. Worse, on an ASCII
system, 'J'-'I' is 1; in EBCDIC it is 8.
About the only character in common is '\0' which is 0x00 on both.
--
Remove del for email
== 5 of 9 ==
Date: Mon, Jan 11 2010 10:02 pm
From: Richard Heathfield
Barry Schwarz wrote:
> On Mon, 11 Jan 2010 15:14:23 -0800, Keith Thompson <kst-u@mib.org>
> wrote:
<snip>
>> In your new code:
>>
>> i = (int) ((double) rand () / ((double) RAND_MAX + 1) * N);
>>
>> the cast to int is unnecessary, since the result is being assigned to
>> an int object. The other two casts are necessary and appropriate,
>> since in their absence the int values wouldn't be converted to double.
>
> Only the second cast to double is necessary.
Not even the second cast is necessary.
i = (rand() / (RAND_MAX + 1.0)) * N;
<snip>
--
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
"Usenet is a strange place" - dmr 29 July 1999
Sig line vacant - apply within
== 6 of 9 ==
Date: Mon, Jan 11 2010 10:41 pm
From: Keith Thompson
frank <frank@example.invalid> writes:
> Ben Bacarisse wrote:
[...]
>> To get a floating-point number in [0, 1) I have taken to writing:
>>
>> nextafter((double)rand() / RAND_MAX, 0)
>>
>> nextafter is a C99 function that gives the next representable number,
>> near the first argument in the direction of the second. There are
>> probably better ways to do this, but the best of all would be a
>> floating-point random function in C. Such a function could rely on
>> the internal representation of a floating point number to give a
>> properly uniform distribution. Many C libraries include such a
>> function as an extension.
>
> Is gcc one of them?
No, since gcc is a compiler not a library. (glibc is the library
most commonly associated with gcc, but in fact gcc is generally
used with whatever library exists on the system.)
[...]
--
Keith Thompson (The_Other_Keith) kst-u@mib.org <http://www.ghoti.net/~kst>
Nokia
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
== 7 of 9 ==
Date: Mon, Jan 11 2010 10:51 pm
From: Keith Thompson
frank <frank@example.invalid> writes:
> Keith Thompson wrote:
>> frank <frank@example.invalid> writes:
>>> Barry Schwarz wrote:
>>>> On Sun, 10 Jan 2010 16:05:14 -0700, frank <frank@example.invalid>
>>>> wrote:
[...]
>>>>> i is 1337295409
>>>>> i is 2147483647
>>>> One of these statements must be false.
>>> They are not simultaneously, but sequentially true.
>>
>> As I already pointed out and you acknowledged, the second "i is" was a
>> typo for "RAND_MAX is". i never takes on the value 2147483647, except
>> perhaps by coincidence. As printed, the first statement is true, the
>> second is false.
>
> Now that I think about it, you are absolutely correct. i assumes one
> value in the program and hence cannot be two. I think I was injecting
> less belief in the "i is " part than a literal one. A longer version
> of it may have been better to read "The integer I'm looking for is ".
So you were using "i" to refer generically to whatever integer you
happen to be looking at, while your program declares an integer object
named "i". That's, um, a very interesting way of looking at things.
Really, given that you wanted to display the values of i and RAND_MAX,
the only sensible thing to write would be:
printf("i is %d\n", i);
printf("RAND_MAX is %d\n", RAND_MAX);
or some variation.
[...]
>>> Why would a person want to have a signed char? I've never used one,
>>> except errantly.
>>
>> Historical reasons, mostly. The point is that, on many modern
>> implementations, very likely including the one you're using, plain
>> char is a signed type.
>>
>> Try printing the values of CHAR_MIN, CHAR_MAX, SCHAR_MIN, SCHAR_MAX,
>> and UCHAR_MAX (defined in <limits.h>).
>
> For something like this, I don't want to write a program; I want to
> look at limits.h for my implementation.
Ok, nobody's stopping you. But why? The standard headers aren't
generally written to be particularly human-readable. On my system,
for example, I see the following in /usr/include/limits.h:
/* Minimum and maximum values a `char' can hold. */
# ifdef __CHAR_UNSIGNED__
# define CHAR_MIN 0
# define CHAR_MAX UCHAR_MAX
# else
# define CHAR_MIN SCHAR_MIN
# define CHAR_MAX SCHAR_MAX
# endif
I can guess where __CHAR_UNSIGNED__ would be defined, but I'm not sure
(plain char is signed on my system).
For that matter, the standard headers aren't necessarily even
implemented as source files.
Writing and executing a program is the only reliable way to display
the values to which these macros expand.
> I've probably asked you three
> times for this over the past couple years, but I keep getting sent
> back to go in a lot of ways, as I work up my linux install for the 4th
> time. What is the name of the newsgroup specific to gcc?
You're probably looking for gnu.gcc.help.
--
Keith Thompson (The_Other_Keith) kst-u@mib.org <http://www.ghoti.net/~kst>
Nokia
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
== 8 of 9 ==
Date: Mon, Jan 11 2010 10:57 pm
From: Keith Thompson
frank <frank@example.invalid> writes:
> Keith Thompson wrote:
>> frank <frank@example.invalid> writes:
>>> Keith Thompson wrote:
>>>> frank <frank@example.invalid> writes:
>>> [snipped and reordered for thematic reasons]
>>>
>>>> None of the three casts in your program are necessary, and IMHO your
>>>> code would be improved by dropping them.
>>>>
>>>> srand(time(NULL);;
>>>> ...
>>>> c = i;
>>> This seems to work (with a right paren added and semi-colon removed):
>>
>> Oops, typo on my part.
>
> I think "we" should update the FAQ to replace your expression with the
> one that Steve Summit had.
I think you meant that the other way around.
> People like me, who read his collection
> while sitting in a parking lot, are always grateful for his
> contribution, but useless casts make code unreadable.
What useless casts? There were several in the earlier code you
posted; there aren't very many in the FAQ. In question 13.16, we see:
(int)((double)rand() / ((double)RAND_MAX + 1) * N)
and
(int)(drand48() * N)
The double casts *are* necessary. The int casts may or may not be,
depending on what's done with the result.
[...]
>> In your new code:
>>
>> i = (int) ((double) rand () / ((double) RAND_MAX + 1) * N);
>>
>> the cast to int is unnecessary, since the result is being assigned to
>> an int object. The other two casts are necessary and appropriate,
>> since in their absence the int values wouldn't be converted to double.
>
> Still curious about this.
About what? I don't know what you're asking.
>> Indentation?
>>>>
>> [52 lines deleted]
>>> It took less than a minute.
>>
>> Great. Though I'm not quite sure why you felt the need to tell us, in
>> great detail, how you did it. Just posting properly indented code is
>> more than enough.
>
> http://clc-wiki.net/wiki/clc-wiki:Policies#codeformat
>
> This is topical, according to the wiki.
[...]
My remark was more about verbosity than about topicality, but arguing
the point any further would just be ironic.
--
Keith Thompson (The_Other_Keith) kst-u@mib.org <http://www.ghoti.net/~kst>
Nokia
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
== 9 of 9 ==
Date: Mon, Jan 11 2010 11:02 pm
From: Keith Thompson
Richard Heathfield <rjh@see.sig.invalid> writes:
> Barry Schwarz wrote:
>> On Mon, 11 Jan 2010 15:14:23 -0800, Keith Thompson <kst-u@mib.org>
>> wrote:
>
> <snip>
>
>>> In your new code:
>>>
>>> i = (int) ((double) rand () / ((double) RAND_MAX + 1) * N);
>>>
>>> the cast to int is unnecessary, since the result is being assigned to
>>> an int object. The other two casts are necessary and appropriate,
>>> since in their absence the int values wouldn't be converted to double.
>>
>> Only the second cast to double is necessary.
>
> Not even the second cast is necessary.
>
> i = (rand() / (RAND_MAX + 1.0)) * N;
Nicely done!
In cases like this, though, I think I'd argue that using casts or not
is largely a matter of taste. I find that I'm a little uncomfortable
with the way the constant 1.0 imposes its type on the rest of the
expression, bubbling up through multiple levels of the tree.
And I suppose that's inconsistent with most of what I've said about
using casts where implicit conversions would do the same job. Oh,
well.
--
Keith Thompson (The_Other_Keith) kst-u@mib.org <http://www.ghoti.net/~kst>
Nokia
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
==============================================================================
TOPIC: Garbage
http://groups.google.com/group/comp.lang.c/t/e540ba3bcd7a281f?hl=en
==============================================================================
== 1 of 2 ==
Date: Mon, Jan 11 2010 8:38 pm
From: James Dow Allen
I have a certain sympathy for an "Anti-pedantry" position but ...
On Jan 12, 3:59 am, Antoninus Twink <nos...@nospam.invalid> wrote:
> On 11 Jan 2010 at 18:20, James Dow Allen wrote:
> > I've left r.g.b in the Newsgroup list so r.g.b'ers can
> > know what we c.l.c'ers think of Kenny.
> .
> What makes you think you speak for all clc'ers?
> .
> Your arrogance certainly marks you out as exactly one of those
> "regulars" that Kenny is talking about.
By this point, I think I'll take that as a compliment.
James
== 2 of 2 ==
Date: Mon, Jan 11 2010 11:44 pm
From: gwowen
On Jan 11, 8:59 pm, Antoninus Twink <nos...@nospam.invalid> wrote:
> Your arrogance certainly marks you out as exactly one of those
> "regulars" that Kenny is talking about.
Oh please, Sockpuppet. Your opinion on yourself is not enlightening.
==============================================================================
TOPIC: Comparison of C Sharp and C performance
http://groups.google.com/group/comp.lang.c/t/4cf78a2afa73b77a?hl=en
==============================================================================
== 1 of 6 ==
Date: Mon, Jan 11 2010 8:52 pm
From: spinoza1111
On Jan 12, 11:15 am, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
> spinoza1111<spinoza1...@yahoo.com> writes:
>
> <snip>
>
> > If a C #define macro contains a #defined symbol, the inner symbol has
> > in fact to be translated before the outer symbol.
>
> That's wrong. Here are two examples:
>
> #define INNER1 arg
> #define OUTER1(arg) INNER1
>
> #define INNER2(x) #x
> #define OUTER2(y) INNER2(y)
>
> OUTER1(A)
> OUTER2(A)
>
> Work out what would happen if the inner macro is expanded before the
> outer one. Then try the examples out.
>
> The C pre-processor does re-scanning rather than nested inside-out
> expansion.
Thanks for the clarification, if indeed this is a "clarification" in
the sense of being "clear" and therefore true. I don't have time,
right now, to work through your example. But is it fair to say that
the preprocessor rescans until all define symbols are gone? If this is
the case, can source code loop the preprocessor? If this is the case,
does this not suck? And do you know whether there is any difference
between inside out expansion and rescanning?
Note to corporate autists: a language lawyer level knowledge of so
flawed an artifact as C is NOT competence at the only meaningful task,
which is programming and maintaining software, and of the creeps here,
only Ben seems to have this competence. Kenny and the Twink may have
it as well, and I mention them because they are not creeps. This is
because competent programmers try to avoid exposure to the thought
viruses of C by not using stupid tricks, and competent programmers
rewrite poorly written code filled with Stupid C Tricks.
Thank you for your attention.
>
> <snip>
> --
> Ben.
== 2 of 6 ==
Date: Mon, Jan 11 2010 8:53 pm
From: spinoza1111
On Jan 12, 11:03 am, Richard Heathfield <r...@see.sig.invalid> wrote:
> spinoza1111wrote:
> > On Jan 11, 11:33 pm, Seebs <usenet-nos...@seebs.net> wrote:
>
> <snip>
>
> >> If you said it, or Francis said it, I'd probably think that was obviously
> >> what was meant. Spinny, though, is sufficiently completely incapable of
> >> getting technical details right (consider that this whole subthread was
> >> inspired by his assertion that the preprocessor was doing the constant
>
> > That statement was redacted. How's that DOS heap coming along? And
> > your belief that a psychology major and a fashionable disease entitles
> > you to destroy reputations?
>
> > The point was that in effect that's what happens.
>
> No, the point was that it is important not to confuse the textual
> substitution performed by the preprocessor with the compile-time
> arithmetical simplification performed (optionally) by the translator, be
> it an interpreter or a compiler or something in between.
Yeah, I cleared this up for you, didn't I.
>
> <snip>
>
> > Those of us who have written compilers understand this.
>
> You wear that like a badge, but if your C knowledge is anything to go
> by, I wouldn't trust your compiler to compile "hello world" properly.
Stop making a fool of yourself.
>
> --
> Richard Heathfield <http://www.cpax.org.uk>
> Email: -http://www. +rjh@
> "Usenet is a strange place" - dmr 29 July 1999
> Sig line vacant - apply within
== 3 of 6 ==
Date: Mon, Jan 11 2010 8:55 pm
From: cri@tiac.net (Richard Harter)
On Mon, 11 Jan 2010 17:38:53 -0800 (PST), spinoza1111
<spinoza1111@yahoo.com> wrote:
>Part of education is getting rid of the habit of reification, and the
>understanding that CONCEPTS are not THINGS (especially not animals:
>this is a regression to childhood). People are diverted into crappy
>but apparently well paid jobs as "computer programmers" and wind up
>using childish metaphors unaware that they are metaphors, but some of
>us who were so diverted know this.
You were wise to abandon philosophy.
Richard Harter, cri@tiac.net
http://home.tiac.net/~cri, http://www.varinoma.com
Infinity is one of those things that keep philosophers busy when they
could be more profitably spending their time weeding their garden.
== 4 of 6 ==
Date: Mon, Jan 11 2010 9:03 pm
From: spinoza1111
On Jan 12, 10:54 am, Richard Heathfield <r...@see.sig.invalid> wrote:
> spinoza1111wrote:
> > Here's who you're calling ignorant:
>
> I'm not calling anyone ignorant. Learn to read for comprehension.
Stop making a fool of yourself. I swear to God I'm gonna write you a
sonnet on that theme.
Here's what you said about Peter Neumann: "the moderator of
comp.risks, whom I was crediting with good sense - apparently
mistakenly. I'm not even remotely interested in what ignorant people
believe about you."
Don't make total fool of thy sweet self
Thou art already mostly fashion'd fool:
With pomposity and pretense you exhaust pelf:
Perhaps you need to go back to school.
And learn there in fool school manners
Such breeding as can be taught you,
How to write parsers and also scanners,
And how to wipe your arse after you poo.
Such a Borstal you deserve, dearest Dick
Not the victor's crown or laurel so green:
The loser's lousy lot forever you will lick
Until you cultivate sensitivity and understanding.
Let this indeed be a lesson unto ye
Stop causing this thread such unspeakable Misery.
>
> --
> Richard Heathfield <http://www.cpax.org.uk>
> Email: -http://www. +rjh@
> "Usenet is a strange place" - dmr 29 July 1999
> Sig line vacant - apply within
== 5 of 6 ==
Date: Mon, Jan 11 2010 9:11 pm
From: spinoza1111
On Jan 12, 12:55 pm, c...@tiac.net (Richard Harter) wrote:
> On Mon, 11 Jan 2010 17:38:53 -0800 (PST),spinoza1111
>
> <spinoza1...@yahoo.com> wrote:
> >Part of education is getting rid of the habit of reification, and the
> >understanding that CONCEPTS are not THINGS (especially not animals:
> >this is a regression to childhood). People are diverted into crappy
> >but apparently well paid jobs as "computer programmers" and wind up
> >using childish metaphors unaware that they are metaphors, but some of
> >us who were so diverted know this.
>
> You were wise to abandon philosophy.
WTF? I didn't. The head of the department asked me after I got my BA,
and with no plans to go to graduate school, to teach philosophy.
But after programming thereafter for ten years, I woke up in a sort of
Anbar Awakening, and saw for the first time the dull souls of my
coworkers. I became an autodidact but took whatever opportunity I
could to Get Smart, including bypassing higher-paid Korporate jobs for
a job at Princeton University.
But you can read the story in my published book, since Apress allowed
me to put in biographical details while discussing how to build a
compiler in an amusing fashion.
Programming was merely for me a draft-dodging scheme that got outa
hand (the draft didn't end until 1973 and I took my first computer
science class in 1970). But one gets interested in reified crap.
>
> Richard Harter, c...@tiac.net
> http://home.tiac.net/~cri, http://www.varinoma.com
> Infinity is one of those things that keep philosophers busy when they
> could be more profitably spending their time weeding their garden.
== 6 of 6 ==
Date: Mon, Jan 11 2010 9:41 pm
From: Richard Heathfield
spinoza1111 wrote:
> On Jan 12, 10:54 am, Richard Heathfield <r...@see.sig.invalid> wrote:
>> spinoza1111wrote:
>>> Here's who you're calling ignorant:
>> I'm not calling anyone ignorant. Learn to read for comprehension.
>
> Stop making a fool of yourself. I swear to God I'm gonna write you a
> sonnet on that theme.
>
> Here's what you said about Peter Neumann: "the moderator of
> comp.risks, whom I was crediting with good sense - apparently
> mistakenly. I'm not even remotely interested in what ignorant people
> believe about you."
I don't expect you to understand that paragraph. In fact, at this stage
I can hardly bring myself to expect you to understand the alphabet.
--
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
"Usenet is a strange place" - dmr 29 July 1999
Sig line vacant - apply within
==============================================================================
TOPIC: arithmetic on a void * pointer
http://groups.google.com/group/comp.lang.c/t/451b17d19dcc5236?hl=en
==============================================================================
== 1 of 5 ==
Date: Mon, Jan 11 2010 10:17 pm
From: Seebs
On 2010-01-12, Richard Heathfield <rjh@see.sig.invalid> wrote:
> The point of void* is not to be a mere synonym for char*, but to allow
> us to abstract away the differences between types when performing
> operations on objects for which their type is immaterial, or to allow us
> to process them so far, and then hand on their type-specific part to a
> function that knows about their type. Obvious examples are mem*, qsort,
> bsearch, fread, fwrite, and abstract container libraries.
In particular:
I find that it's *USEFUL* to get warned if I try to do arithmetic on
a (void *), because it means I think I know what I have a pointer to, and
that either I should change the type of the pointer to reflect what I know
it points to, or I don't know enough to do that arithmetic.
If I want a buffer of unsigned chars, I know where to find it.
-s
--
Copyright 2010, all wrongs reversed. Peter Seebach / usenet-nospam@seebs.net
http://www.seebs.net/log/ <-- lawsuits, religion, and funny pictures
http://en.wikipedia.org/wiki/Fair_Game_(Scientology) <-- get educated!
== 2 of 5 ==
Date: Mon, Jan 11 2010 10:45 pm
From: spinoza1111
On Jan 12, 11:33 am, Richard Heathfield <r...@see.sig.invalid> wrote:
> spinoza1111 wrote:
> > But congratulations to the growing number of people
> > who are calling Heathfield on his BS.
>
> Firstly, it's not BS - but I realise you aren't able to recognise that.
> Secondly, let's count them, shall we? Six months ago, it was five: you,
> Kenny, Richard NoName, Twink, and Han - all except you being widely
> recognised as trolls. Judging by the lack of feedback, Han appears to
> have gone, so it seems that either the number has actually reduced to
> four or everybody simultaneously killfiled Han some months ago. And if
> we reduce the number to those who actually post under their real name,
> that number shrinks to one. And, unless their posting behaviour has
> altered dramatically since they hit my killfile, none of the above has
> anything worthwhile to say about C.
>
> If one's companions are an indication of one's credibility, you couldn't
> have made a worse choice as far as C programming is concerned. And I'm
> beginning to see that Keith is right. A cost/benefit analysis suggests
> that discussing anything with you is far too expensive, since you are
> too stupid and arrogant to learn quickly. You assume you know it all, as
> a result of which it takes ages to teach you even the simplest thing,
> because you're too busy denying it to bother learning it. And even when
> (or rather if) the knowledge does finally break through, you then claim
> you knew it all along (despite having argued for its falsity).
>
> The rest of this article addresses the original subject of the thread,
> "arithmetic on a void * pointer".
>
> It may well be that the OP's attitude to void * stems from what might be
> called a "pigeon-hole" approach to programming - memory is a bunch of
> pigeon-holes, and a program is just a list of instructions for visiting
> them (and modifying their contents) in a particular, dynamic, order.
> This very low-level approach is absolutely fine and highly justifiable
> for some people (especially writers of object code!). One can see why
> such a person might have little patience with restrictions on pointer
> arithmetic (and indeed with other forms of abstraction).
>
> For those who have a higher level approach, however, the abstract
> machine (AM) comes to the fore - and the AM doesn't define arithmetic on
> void pointers because it can't be sure of the outcome of such arithmetic
> on the real machines on which the AM sits.
>
> The Standard's definition of p++ can be thought of as "increment p by
> sizeof *p" - i.e. if p's object representation is 0xF000 and it's a
> pointer to an int where int is known to be 6 bytes, then we should not
> be entirely surprised if p's object representation changes to 0xF006.
> Now, if p is a void *, then we're asking for p to be incremented by the
> value of sizeof(void), which is obviously meaningless, since void is an
> incomplete type that cannot be completed and so there is no way to know
> how big it is.
>
> The point of void* is not to be a mere synonym for char*, but to allow
> us to abstract away the differences between types when performing
> operations on objects for which their type is immaterial, or to allow us
> to process them so far, and then hand on their type-specific part to a
> function that knows about their type. Obvious examples are mem*, qsort,
> bsearch, fread, fwrite, and abstract container libraries.
>
> If the OP can't see the point of void *, it may be that this is simply a
> facet of the way he approaches programming - in other words, maybe for
> him there isn't any point to it. It is certainly true that, for his
> compression needs, he doesn't actually need void * - he could easily
> make do with unsigned char * instead.
Pompous and content-free, since if you were really interested in
abstraction and type safety you would not use C. You would learn an
object oriented language, but you refuse to be vulnerable and to
learn.
The original poster is in fact a competent C programmer. You're not.
Object oriented languages allow both "incomplete classes" and
"abstract classes" and they separate these notions, whereas void is
neither.
In C Sharp, for example, an incomplete class definition is simply part
of its source code. An abstract class is one that cannot be
instantiated but must be inherited.
But void is-not inherited in any meaningful sense in C. You cannot say
than an int is-a void with additional properties and additional
restrictions.
Which means in C that the GCC option makes perfect sense and provides
the ability to do arithmetic on pure pointers which point to sod-all.
This is in fact the world of assembler: Never-Never land, a dream time
when programmers had Fun as opposed to the corporate reality of today,
where they, like Seebach, have to send bugs to Timbuktu for fixing
and at best write tools and scripts that nobody asked them to write to
keep busy, as I myself have done at more than one job...because
there's no competitive advantage to be had, any more, from new and
risky software.
C programmers are Peter Pans who want to be simultaneously recognized
as grownups interested in Grownup things like reliability and
portability, but at the same time to have Fun in a time-less Never-
Never land where they can fantasize that they are close to the
machine.
But my Dad asked me in 1971 a very good question. He said, what will
happen when you programmers are done with your work? I had no answer
and the reality today is 12% unemployment and mass homelessness, even
among former programmers.
But it is not only JM Barrie's Peter Pan that comes to mind. Another
book that comes up in these discussions is Lord of the Flies, because
the bullies in that book pass a law: we must have fun. The goal of
Heathfield et al. is to have fun, if necessary at the expense of the
reputations of solid professionals like Peter Neumann, Herb Schildt,
and Jacob Navia.
Anyone who forms his own correct view and, precisely because he's done
the real homework, is able to put it into his own words, abandoning
the shibboleths of the Lost Boys here, is a Piggy and a spoilsport who
must die.
Most of you creeps flunked English, which is part of your problem, so
at this point I can see you saying WTF.
>
> --
> Richard Heathfield <http://www.cpax.org.uk>
> Email: -http://www. +rjh@
> "Usenet is a strange place" - dmr 29 July 1999
> Sig line vacant - apply within
== 3 of 5 ==
Date: Mon, Jan 11 2010 10:49 pm
From: spinoza1111
On Jan 12, 11:33 am, Richard Heathfield <r...@see.sig.invalid> wrote:
> spinoza1111 wrote:
> > But congratulations to the growing number of people
> > who are calling Heathfield on his BS.
>
> Firstly, it's not BS - but I realise you aren't able to recognise that.
> Secondly, let's count them, shall we? Six months ago, it was five: you,
> Kenny, Richard NoName, Twink, and Han - all except you being widely
> recognised as trolls. Judging by the lack of feedback, Han appears to
> have gone, so it seems that either the number has actually reduced to
> four or everybody simultaneously killfiled Han some months ago. And if
> we reduce the number to those who actually post under their real name,
> that number shrinks to one. And, unless their posting behaviour has
> altered dramatically since they hit my killfile, none of the above has
> anything worthwhile to say about C.
>
> If one's companions are an indication of one's credibility, you couldn't
> have made a worse choice as far as C programming is concerned. And I'm
> beginning to see that Keith is right. A cost/benefit analysis suggests
> that discussing anything with you is far too expensive, since you are
> too stupid and arrogant to learn quickly. You assume you know it all, as
> a result of which it takes ages to teach you even the simplest thing,
> because you're too busy denying it to bother learning it. And even when
> (or rather if) the knowledge does finally break through, you then claim
> you knew it all along (despite having argued for its falsity).
>
> The rest of this article addresses the original subject of the thread,
> "arithmetic on a void * pointer".
>
> It may well be that the OP's attitude to void * stems from what might be
> called a "pigeon-hole" approach to programming - memory is a bunch of
> pigeon-holes, and a program is just a list of instructions for visiting
> them (and modifying their contents) in a particular, dynamic, order.
> This very low-level approach is absolutely fine and highly justifiable
> for some people (especially writers of object code!). One can see why
> such a person might have little patience with restrictions on pointer
> arithmetic (and indeed with other forms of abstraction).
>
> For those who have a higher level approach, however, the abstract
> machine (AM) comes to the fore - and the AM doesn't define arithmetic on
> void pointers because it can't be sure of the outcome of such arithmetic
> on the real machines on which the AM sits.
>
> The Standard's definition of p++ can be thought of as "increment p by
> sizeof *p" - i.e. if p's object representation is 0xF000 and it's a
> pointer to an int where int is known to be 6 bytes, then we should not
> be entirely surprised if p's object representation changes to 0xF006.
> Now, if p is a void *, then we're asking for p to be incremented by the
> value of sizeof(void), which is obviously meaningless, since void is an
> incomplete type that cannot be completed and so there is no way to know
> how big it is.
>
> The point of void* is not to be a mere synonym for char*, but to allow
> us to abstract away the differences between types when performing
> operations on objects for which their type is immaterial, or to allow us
> to process them so far, and then hand on their type-specific part to a
> function that knows about their type. Obvious examples are mem*, qsort,
> bsearch, fread, fwrite, and abstract container libraries.
>
> If the OP can't see the point of void *, it may be that this is simply a
> facet of the way he approaches programming - in other words, maybe for
> him there isn't any point to it. It is certainly true that, for his
> compression needs, he doesn't actually need void * - he could easily
> make do with unsigned char * instead.
Not if the size of the character is larger than the byte. News flash,
Heathfield. The "smallest addressable unit" of most modern platforms
is no longer == char, because of internationalization: the wide char
IS the char in reality. Therefore Adler needs to code what he MEANS,
which is the calculation of byte and not character addresses.
== 4 of 5 ==
Date: Mon, Jan 11 2010 10:56 pm
From: spinoza1111
On Jan 12, 11:33 am, Richard Heathfield <r...@see.sig.invalid> wrote:
> spinoza1111 wrote:
> > But congratulations to the growing number of people
> > who are calling Heathfield on his BS.
>
<snip>
> The Standard's definition of p++ can be thought of as "increment p by
> sizeof *p" - i.e. if p's object representation is 0xF000 and it's a
> pointer to an int where int is known to be 6 bytes, then we should not
> be entirely surprised if p's object representation changes to 0xF006.
> Now, if p is a void *, then we're asking for p to be incremented by the
> value of sizeof(void), which is obviously meaningless, since void is an
> incomplete type that cannot be completed and so there is no way to know
> how big it is.
Nope. In verse:
The size of void is unity
We can say so with impunity
Because this ain't theology
It's JUST technology.
Throw the standard in the trash
'Twas writ to save vendor cash
Stop feigning false ignorance
Surplus to your genuine stupidity
There's no need to pretend to be a dunce
When it's clear you so clue-challenged be.
Besides (to lapse back to prose), if Adler uses a GCC compiler that
allows him to use voids as real pointers that point to the smallest
addressable unit of memory, the only problem will be when he needs to
retarget to a machine that's not supported by GCC (and GCC is both
retargetable and runs on many platforms).
== 5 of 5 ==
Date: Mon, Jan 11 2010 11:09 pm
From: Richard Heathfield
spinoza1111 wrote:
> On Jan 12, 11:33 am, Richard Heathfield <r...@see.sig.invalid> wrote:
<snip>
>> If the OP can't see the point of void *, it may be that this is simply a
>> facet of the way he approaches programming - in other words, maybe for
>> him there isn't any point to it. It is certainly true that, for his
>> compression needs, he doesn't actually need void * - he could easily
>> make do with unsigned char * instead.
>
> Not if the size of the character is larger than the byte.
We're talking about chars, not characters. I do not expect you to
understand the difference.
A char is guaranteed to occupy exactly one byte of storage, and the byte
is guaranteed to be at least 8 bits wide (but it can be wider).
> News flash,
> Heathfield. The "smallest addressable unit" of most modern platforms
> is no longer == char,
Olds flash: the fact that the smallest addressable unit of storage is
the byte, and one char requires exactly one byte of storage, remains
true for all conforming C implementations, even on modern platforms.
> because of internationalization: the wide char
> IS the char in reality. Therefore Adler needs to code what he MEANS,
> which is the calculation of byte and not character addresses.
Which is precisely what unsigned char * will give him.
--
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
"Usenet is a strange place" - dmr 29 July 1999
Sig line vacant - apply within
==============================================================================
You received this message because you are subscribed to the Google Groups "comp.lang.c"
group.
To post to this group, visit http://groups.google.com/group/comp.lang.c?hl=en
To unsubscribe from this group, send email to comp.lang.c+unsubscribe@googlegroups.com
To change the way you get mail from this group, visit:
http://groups.google.com/group/comp.lang.c/subscribe?hl=en
To report abuse, send email explaining the problem to abuse@googlegroups.com
==============================================================================
Google Groups: http://groups.google.com/?hl=en