2013/4/29-5/18 [Computer/SW/Languages/C_Cplusplus, Computer/SW/Compilers] UID:54665 Activity:nil
4/29    Why were C and Java designed to require "break;" statements for a
        "case" section to terminate rather than falling through to the next
        section?  99% of the time people want a "case" section to terminate.
        In fact some compilers issue a warning if there is no "break;"
        statement in a "case" section.  Why not just design the languages to
        have termination as the default behavior, and provide a "fallthru;"
        statement instead?
        \_ C did it that way because it was easy to implement -- they just
           used the existing break statement instead of having to program a
           new statement with new behavior.  Java did it to match C.
           \_ I see. -- OP
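        [A minimal C sketch of the fall-through behavior being discussed;
         the case values and messages are made up for illustration:]
            #include <stdio.h>

            int main(void)
            {
                int n = 1;                 /* made-up input value */
                switch (n) {
                case 1:
                    printf("one\n");       /* no break: falls through */
                case 2:
                    printf("two\n");       /* runs for n == 1 and n == 2 */
                    break;                 /* break is what stops the cascade */
                default:
                    printf("other\n");
                }
                return 0;
            }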
2012/6/13-7/20 [Computer/SW/Compilers] UID:54417 Activity:nil
6/13    Hysterical:
        http://www.technologyreview.com/view/428175/ebooks-made-of-youtube-comments-invade-amazon
2010/1/20-29 [Computer/SW/Compilers, Computer/SW/Languages/Misc] UID:53646 Activity:nil
1/20    Watch out for these Chinese hackers!!!  Project Aurora has
        infiltrated the U.S.
        http://www.secureworks.com/research/blog/index.php/2010/01/20/operation-aurora-clues-in-the-code
2009/8/31-9/9 [Computer/SW/Compilers] UID:53312 Activity:nil
8/31    I'm trying to learn ActionScript, like a step by step tutorial.
        The site at
        http://www.actionscript.org/resources/categories/Tutorials/Flash/Beginner
        isn't well organized.  It doesn't explain how to get started with
        an editor, compiler, or IDE.  And should I even learn AS2 when you
        can learn AS3?  Is Adobe Flash CS4 >>> CS3, or just CS4 > CS3?
2009/7/21-24 [Computer/SW/Compilers, Computer/SW/OS/OsX] UID:53173 Activity:kinda low
7/21    http://lambda-the-ultimate.org/node/3522
        Read the Apollo 11 code comment.  Bugless?  Yeah right.
        \_ Who claimed that that code was bugless?
        \_ if it was bugless then astronauts wouldn't be necessary
           \_ They're not necessary except to do research.  We don't need
              them to fly anything.
              \_ "Houston, we've had a problem"
                 \_ Wouldn't have been an issue at all if the astronauts
                    weren't up there to begin with.  The O2 was for them.
2009/2/28-3/11 [Computer/SW/Compilers] UID:52661 Activity:nil
2/28    I'm looking for a recommendation of a compiler/IDE to use to develop
        C/C++ code under Linux.  In school we used jove/gcc, and I still use
        emacs/vi and gcc to this day.  However, it is really lacking.  Under
        Windows I tried Visual Studio and there were some really nice things
        about it, although it was so overwhelming that after 6 months of
        occasional use I still didn't really know what I was doing.  I don't
        need something that powerful.  I would like a visual editor that
        allows me to compile from within, preferably with make.  If it has a
        debugger too, that's great, but not a requirement.  I'd like
        something simple and easy to learn/use.  It doesn't have to be free,
        but that's a plus.  Ones I have found: Eclipse, Anjuta, KDevelop,
        Code::Blocks.  Any experiences with these or others?
        \_ No opinions?  This is the CSUA, right?
           \_ I've been using emacs + gcc as my development environment
              since the early 90s and I don't find it lacking, and I have
              been largely frustrated by IDEs (XCode, CodeWarrior, Visual
              Studio, &c.), so I can't really recommend anything to this
              poor poster.  My guess is that many people on the motd feel
              the same as me, hence the lack of responses.
           \_ emacs to edit source files, then whatever IDE u want to compile
           \_ We use IDEA here, which is pretty good but I think Java
              specific.  It's also a big resource hog.  I tried Eclipse for
              a short while; it seems to do normal IDE things ok (e.g.
              jumping to method definitions, finding references to a method
              call, etc).
2008/11/17 [Computer/SW/Compilers, Politics/Domestic] UID:52028 Activity:nil
11/18   http://news.yahoo.com/s/ap/20081117/ap_on_re_eu/eu_britain_new_word
        meh. lulz.
2008/6/13-20 [Computer/SW/Compilers, Computer/SW/OS/FreeBSD] UID:50257 Activity:nil
6/13    Anybody know of a library that can do the following on *BSD
        systems?  Add a function call like "if (debug) print_backtrace()"
        and it would print out the stack trace, similar to setting a
        breakpoint in GDB and then doing "bt".  Running GDB is not an
        option sometimes.
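        [A minimal sketch using the execinfo backtrace() API; on
         FreeBSD/NetBSD this typically comes from the libexecinfo port and
         needs -lexecinfo at link time, and you may need -rdynamic to get
         symbol names instead of raw addresses (assumptions -- check your
         system):]
            #include <execinfo.h>
            #include <unistd.h>

            static void print_backtrace(void)
            {
                void *frames[64];
                int n = backtrace(frames, 64);      /* capture call frames */
                backtrace_symbols_fd(frames, n, STDERR_FILENO); /* print them */
            }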
2008/5/2-8 [Computer/SW/Compilers] UID:49874 Activity:low
5/2     How do I get the L1/L2 cache size and cache line size on my
        machine?  Can I find this stuff out at compile time somehow?
        \_ You aren't planning on running your code on any other processors?
        \_ May I ask what it is you want to achieve ultimately?  If you
           don't know your architecture and want to find out dynamically,
           there are tools that can peek/poke to give you definitive
           answers, plus you get to see the latency of L1/L2/memory and
           infer a lot of info like cache associativity.  Prof Saavedra has
           done a lot of cache benchmarks and micro measurements.  -kchang
           http://citeseer.ist.psu.edu/saavedra95measuring.html
        \_ No, there isn't a simple way, compile time or not.  Your common
           user-oriented desktop compiler today doesn't know anything about
           L1/L2 beyond how to do proper register allocation, and in most
           cases doesn't even do a good job of spilling, knowing about
           cache effects, etc.  Now, there is a lot of research from the
           past 20 years where compilers for specialized applications will
           optimize vector computing, tight loops, unrolling based on known
           monotonicity of variables, specialization, and will lay out
           memory access patterns based on memory locality (L1/L2), but
           you're not going to get that type of optimization from gcc or
           the M$ compiler.
           \_ That's unfortunate.  Is there some program I can run to find
              out?  /proc/cpuinfo tells me "cache size : 4096 KB", but
              doesn't give L1 or line size.  Is that just the data cache?
              (This is an Intel Core2.)
        \_ What the hell does "at compile time" mean?
           \_ I mean, perhaps there is some built-in constant I can use
              that gives the L2 cache size of the machine you're compiling
              on.
              \_ You mean for a POSIX C++ compiler?  Also, do you not
                 expect your code to be run on any other machine?
                 \_ Nope, this is purely an optimization test for 1
                    machine.  I guess if there's a way to find out the
                    sizes dynamically, that's ok too.
           \_ What I'm saying is that "compile time" doesn't mean shit.
              Compile time + what language/compiler you are using might
              mean something.
              \_ Language: C, C++, or Fortran.  Not picky.
                 Compiler: gnu, intel, or portland.  Not picky.
                 OS: Linux.
        \_ If it's just for one machine, why don't you just look up the
           specs at Intel or AMD's website?
           \_ I think I'd have to know more about the CPU than I do.  Also,
              it would be nice to be able to recompile it on other machines
              for comparison.
              \_ Then you probably want to look into a CPUID utility (or
                 roll your own simple version if you just want simple cache
                 info), along with preprocessing of some sort.  However,
                 cache size and line size are spelled out pretty clearly in
                 specs, so you don't have to know all that much.
                 \_ Someone considerately overwrote my post here, so to
                    recap: look into a cpuid utility + preprocessing.
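        [A small runtime sketch for Linux/glibc; the _SC_LEVEL* sysconf
         names are a glibc extension rather than POSIX, and may return 0 on
         some systems (assumption -- verify on your box):]
            #include <stdio.h>
            #include <unistd.h>

            int main(void)
            {
                long l1   = sysconf(_SC_LEVEL1_DCACHE_SIZE);     /* bytes */
                long line = sysconf(_SC_LEVEL1_DCACHE_LINESIZE); /* bytes */
                long l2   = sysconf(_SC_LEVEL2_CACHE_SIZE);      /* bytes */
                printf("L1d=%ld line=%ld L2=%ld\n", l1, line, l2);
                return 0;
            }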
2007/11/30-12/6 [Computer/SW/Compilers, Computer/HW/CPU] UID:48719 Activity:moderate
11/29   From the CSUA minutes:
        - Next Gen Console
          -- If we have $1800 in our accounts, should we buy a console:
             4 votes, passes.
          -- Console voting: 2 votes each, neither passes
             * 360 = 600, more games
             * PS3 = 650, not as many games
        Does this mean the CSUA already has a Wii?  Since when is "more
        expensive, fewer games" an argument for something?  I guess if
        they're gonna install Linux and try some Cell development, THAT
        would be cool, but I don't think that's what they want it for.
        \_ Netrek is free.. but you need to have skills
        \_ I think the decision should be based on which you can hack
           and/or boot alternate OS's on.  I think there is a clear answer
           here...
        \- YMWTS: KYELICK et al paper "The potential of the cell processor
           for scientific computing" on the POWER of the CELL.  Interesting
           and quick read.  Note: KYELICK is now the Director of NERSC.
           \_ Yeah, but Roadrunner (a combo Opteron/Cell cluster proposed
              at Los Alamos) is still a dumb idea.
              \_ Why do you say that?  I'd be more concerned about using
                 /panfs as the storage system.  Panasas might be ok by the
                 time it is deployed.  A lot of impressive people there,
                 but mixed experiences in practice.
                 \_ The Cell already has a perfectly good general processor
                    attached to it (a dual core Power 5).  What's the
                    Opteron doing there?  The last thing the Cell
                    development tool kit needs is another totally different
                    processor to work with.  Yeah!  Another compiler!  For
                    heaven's sake, they don't even have the same
                    endianness!
                    \- ibm and amd are working together on a few things
                       like socket compat between POWER-tng and Opteron,
                       and Torrenza(sp?)/HTX rather than PCIe.  the HPC
                       space is very different from the rest of the world
                       ... on a $100m computer you have a legion of
                       programmers to work on tweaking code and compilers,
                       because they are no longer dominated by "expensive
                       programmer time costs".
                       \_ While everything you say is true, I can't see how
                          that excuses creating a totally wacky, needlessly
                          difficult architecture.  Even Los Alamos doesn't
                          have infinite resources; programmer time still
                          costs money, money Los Alamos doesn't have.  Not
                          to mention, they're buying the whole machine,
                          whole hog.  No small test prototype.  On a
                          totally untested architecture.
                          \- no offense intended here, but are you just
                             reading articles on the net or do you have
                             some experience with how large HPC
                             procurements are done?  i dont have any
                             specific knowledge of Los Alamos/Roadrunner
                             but two things dont ring true: 1. los alamos
                             being on the hook for all the dev and tuning
                             work 2. ibm just being responsible for
                             dropping the machine off at the loading dock
                             and being done ... the "whole machine whole
                             hog" part.  usually there are lots of partial
                             milestones involved.  although the somewhat
                             dirty not that secret part of this is those
                             milestones are never missed with major
                             consequences.  [well maybe once, but not with
                             one of the main *hpc* vendors.  i cant mention
                             which well known vendor it was].
                             \_ Sorry, I didn't mean to imply what you've
                                read into the 'whole hog' statement.  I
                                guess that was really poor word choice.  I
                                just meant that Los Alamos didn't buy a
                                small prototype cluster to see how well
                                this thing will work in production, as is
                                normally done.  I'm aware IBM has
                                milestones and will support the cluster.
2007/2/18-20 [Computer/SW/Languages/C_Cplusplus, Computer/SW/Compilers] UID:45772 Activity:nil
2/18    Anyone have Richard Stallman's old .emacs config?
        \- why, you want to borrow some of his abbrev stuff?  that's about
           the main unique thing that was in there.  well, maybe some gdb
           stuff. --psb
           \_ the macros were pretty funny.  can you put a copy in /tmp?
              ok thx.
2006/11/10-12 [Computer/SW/Compilers] UID:45316 Activity:nil
11/10   Is there any way to get C/C++ compilers to automatically compile
        different code for different processors?  I'd like to be able to
        say something like:
            #if defined X86
            ...
            #elif defined SPARC
            ...
            #else
            ...
        If there's no standard way to have the compiler do it, is there an
        easy way to have configure figure it out?
        \_ Most compilers have something like that.  Which compiler are you
           using?
           \_ I have to support a couple.  gcc and icc are the main ones.
              \_ So, do you know how to do it in any compiler?  Please?
        \_ Do you mean that you want to do this without the #if statements?
           Or do you just want standardized OS defs?  If the latter, take a
           look at the wxwidgets project; it's got a lot of examples of
           OS-specific code.
           \_ I don't care about OS, I care about processor architecture:
              x86, sparc, IA64, PowerPC.  I don't really care if it's done
              with #ifdefs or some other way.  It doesn't matter.
              \_ Sorry--brain fart about OS.
        \_ gcc, at least, makes this easy for you.  Do a 'gcc -E' and see
           what defines it sets.  Usually it is things like __sparc__ and
           __X86__ for the relevant architectures.  Then you can just wrap
           your code in #ifdef's for those symbols.  -ERic
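        [A hedged sketch of the kind of dispatch being asked about; the
         macro spellings below are common gcc/icc predefines, but they vary
         by compiler, so check 'gcc -dM -E - < /dev/null' on your
         toolchain:]
            #include <stdio.h>

            static const char *arch(void)
            {
            #if defined(__x86_64__) || defined(__i386__)
                return "x86";           /* x86-specific path */
            #elif defined(__sparc__)
                return "sparc";         /* SPARC-specific path */
            #elif defined(__powerpc__) || defined(__ppc__)
                return "powerpc";       /* PowerPC-specific path */
            #elif defined(__ia64__)
                return "ia64";          /* IA64-specific path */
            #else
                return "unknown";       /* generic fallback */
            #endif
            }

            int main(void) { printf("%s\n", arch()); return 0; }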
2006/8/25-28 [Computer/SW/Languages, Computer/SW/Compilers] UID:44149 Activity:nil
8/25    Why are iterators "superior" to, or more recently popular than, the
        traditional method of using for loops and indexing?
        \_ I guess it's because you can change an array to some other data
           structure (linked-list, tree, ...) without changing the loop
           code.
           \_ This is a limitation of your language, not the concept of
              looping.
        \_ They handle multithreaded use cases better.  They hide
           implementation details.  You can pass iterators around between
           functions and they do what you want without much hassle.
        \_ Traditionally doing pointer comparisons is faster than
           dereferencing by index.  (Good compilers probably will transform
           the latter for you for simple data structures like arrays,
           though.)  Also, they're simply an abstraction that better
           describes what you're trying to accomplish (reverse_iterator) or
           what your needs are (const_iterator).
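        [A small C++ illustration of the "swap the container without
         touching the loop" point; the container choices and names are made
         up for illustration:]
            #include <iostream>
            #include <list>
            #include <vector>

            template <typename Container>
            void print_all(const Container &c)
            {
                /* same loop works for vector, list, set, ... */
                for (typename Container::const_iterator it = c.begin();
                     it != c.end(); ++it)
                    std::cout << *it << '\n';
            }

            int main()
            {
                std::vector<int> v(3, 7);   /* three 7s */
                std::list<int>   l(2, 9);   /* two 9s  */
                print_all(v);   /* indexing would only have worked here */
                print_all(l);
            }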
2006/8/7-11 [Computer/SW/Compilers] UID:43928 Activity:nil
8/7     OK all you compiler geniuses looking for a cool job:
        /usr/local/csua/pub/jobs/Veracode_SrSoftwareEngineer_Compiler
        \_ Why do you keep posting this?
           \_ There must be no compiler geniuses reading motd, only
              sysadmins
2006/7/11 [Computer/SW/Compilers] UID:43629 Activity:nil
7/11    Is there a way to turn off specific warnings on the Intel 9.0 C++
        compilers?  The man page says -wd[warning number] should suppress
        the warning, but that isn't working for me at all.  The only thing
        that does work is just -w, but that suppresses ALL warnings.
        \_ grep -v warning-that-I-dont-care ...
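        [For what it's worth, the documented form takes a comma-separated
         list of diagnostic numbers immediately after -wd, with no space;
         the numbers below are just placeholders:]
            icpc -wd981,1572 -c foo.cpp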
2006/5/17-22 [Computer/SW/Compilers] UID:43081 Activity:nil
5/16    Hey boys and girls!  Do you have time?  Do you want to be
        productive?  Do you want to be a better person?  Then learn
        something new today, like gdb, ddd, strace, nc, gprof, and
        valgrind!  With these new skillsets, YOU can make the world a
        better place!
        \_ So, um, I've monkeyed with all those tools, except nc.
           What's nc?
           \_ Netcat, likely.  -John
        \- you forgot xargs, mapcar and apply
           \_ these are really basic, "small steps" skillsets like ls, man,
              less.  -op
              \_ He didn't ask about tools.  He asked about nc.  -John
        \_ Knowing gdb is a lot more important than ddd
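        [For reference, typical invocations of the tools named above, with
         ./prog standing in for whatever you're working on; exact flags
         vary a bit between versions:]
            gdb ./prog core                     # post-mortem look at a core dump
            strace -f -o trace.log ./prog       # log every syscall to trace.log
            valgrind --leak-check=full ./prog   # catch leaks and bad memory accesses
            gprof ./prog gmon.out > profile.txt # profile a binary built with -pg
            nc -l -p 1234                       # netcat: listen on TCP port 1234
                                                # (listen syntax differs by nc flavor)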
2006/3/31-4/1 [Computer/SW/Compilers] UID:42583 Activity:nil
3/31    Does every C++ compiler give totally worthless error messages, or
        is it just gcc?
        \_ Wow, what a great question.  No, not even gcc error messages are
           totally worthless.
        \_ Aww, give the guy a break -- he's probably just having a really
           difficult day.
        \_ We care why?
2006/1/19-21 [Computer/SW/Compilers] UID:41435 Activity:nil
1/19    Political talk is boring, let's talk about the Linux kernel and
        Java compilers!  Viva la technology!
        \_ OK: If I build a reasonably large website using Apache SSIs in
           every page, will I want to shoot myself later?  And if I enable
           MultiViews, what could go wrong?
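        [For reference, the usual Apache 2.x way to turn both of those on;
         the directory path and the .shtml extension are just placeholder
         choices:]
            <Directory "/var/www/html">
                Options +Includes +MultiViews
                AddType text/html .shtml
                AddOutputFilter INCLUDES .shtml
            </Directory>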