John Carmack » Doom-Ed


John Carmack, creator of Doom and Quake

Filed under: — site admin @ 5:21 am

This blog is dedicated to John Carmack, the creator of games like Doom and Quake. It consists of his .plan file updates starting from 1997, converted into blog format.


Machinima Music Video

Filed under: — johnc @ 4:30 pm

The machinima music video that Fountainhead Entertainment (my wife’s company)
produced with Quake based tools is available for viewing and voting on at: ("In the waiting line")

I thought they did an excellent job of catering to the strengths of the
medium, and not attempting to make a game engine compete (poorly) as a
general purpose renderer. In watching the video, I did beat myself up a
bit over the visible popping artifacts on the environment mapping, which are
a direct result of the normal vector quantization in the md3 format. While it
isn’t the same issue (normals are full floating point already in Doom), it was
the final factor that pushed me to do the per-pixel environment mapping for
the new cards in the current engine.
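For reference, the md3 quantization in question packs each normal into two bytes of latitude/longitude, so adjacent encodings are a bit over a degree apart, and that step is what pops in an environment map. A quick sketch of the decode (the byte layout here follows the commonly documented md3 convention, so treat it as an assumption rather than id's source):

```python
import math

# md3 packs a vertex normal into two bytes: latitude and longitude,
# each stepping in increments of 2*pi/256 (byte layout per the commonly
# documented convention -- an assumption, not id's actual source)
def md3_decode_normal(lat_byte, lng_byte):
    lat = lat_byte * (2.0 * math.pi) / 256.0
    lng = lng_byte * (2.0 * math.pi) / 256.0
    return (math.cos(lat) * math.sin(lng),
            math.sin(lat) * math.sin(lng),
            math.cos(lng))

# one byte step swings the normal by 360/256 = ~1.4 degrees, which is
# plenty visible once the normal indexes an environment map
step_degrees = 360.0 / 256.0

n = md3_decode_normal(64, 32)
length = math.sqrt(sum(c * c for c in n))
```

The decoded vector is always unit length; the visible artifact comes purely from the coarse angular grid.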

The neat thing about the machinima aspect of the video is that they also have
a little game you can play with the same media assets used to create the
video. Not sure when it will be made available publicly.


NV30 vs R300, current developments, etc

Filed under: — johnc @ 4:32 pm

At the moment, the NV30 is slightly faster on most scenes in Doom than the
R300, but I can still find some scenes where the R300 pulls a little bit
ahead. The issue is complicated because of the different ways the cards can
choose to run the game.

The R300 can run Doom in three different modes: ARB (minimum extensions, no
specular highlights, no vertex programs), R200 (full featured, almost always
single pass interaction rendering), ARB2 (floating point fragment shaders,
minor quality improvements, always single pass).

The NV30 can run Doom in five different modes: ARB, NV10 (full featured, five
rendering passes, no vertex programs), NV20 (full featured, two or three
rendering passes), NV30 (full featured, single pass), and ARB2.
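One way to picture how these modes coexist: the engine can probe the driver's extension support and walk a table from the most capable back end down. A hypothetical sketch (path names from the post; the extension lists and the preference ordering are illustrative guesses, not Doom's actual tables):

```python
# Hypothetical back-end selection: walk from the most preferred path
# down, taking the first one whose required extensions the driver
# exposes. Extension sets are illustrative, not Doom's real tables;
# NV30 is listed ahead of ARB2 to reflect the per-vendor default
# described in the post.
PATHS = [
    ("NV30", {"ARB_vertex_program", "NV_fragment_program"}),
    ("ARB2", {"ARB_vertex_program", "ARB_fragment_program"}),
    ("R200", {"ARB_vertex_program", "ATI_fragment_shader"}),
    ("NV20", {"ARB_vertex_program", "NV_register_combiners"}),
    ("NV10", {"NV_register_combiners"}),
    ("ARB",  set()),  # minimum extensions, always available
]

def choose_path(supported):
    for name, required in PATHS:
        if required <= supported:
            return name
    return "ARB"

# an R300 exposes the ARB programmable extensions but not the NV ones,
# so it lands on ARB2 by default
r300_exts = {"ARB_vertex_program", "ARB_fragment_program",
             "ATI_fragment_shader"}
```

The interesting part is that the table encodes policy as much as capability: the R200-vs-ARB2 default on ATI hardware is a quality-over-speed choice, not a hardware limit.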

The R200 path has a slight speed advantage over the ARB2 path on the R300, but
only by a small margin, so it defaults to using the ARB2 path for the quality
improvements. The NV30 runs the ARB2 path MUCH slower than the NV30 path.
Half the speed at the moment. This is unfortunate, because when you do an
exact, apples-to-apples comparison using exactly the same API, the R300 looks
twice as fast, but when you use the vendor-specific paths, the NV30 wins.

The reason for this is that ATI does everything at high precision all the
time, while Nvidia internally supports three different precisions with
different performances. To make it even more complicated, the exact
precision that ATI uses is in between the floating point precisions offered by
Nvidia, so when Nvidia runs fragment programs, they are at a higher precision
than ATI’s, which is some justification for the slower speed. Nvidia assures
me that there is a lot of room for improving the fragment program performance
with improved driver compiler technology.
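A rough way to see why the precisions don't line up is to compare mantissa widths: Nvidia's fp16 carries 10 fraction bits and fp32 carries 23, while ATI's 24-bit format carries 16 (assuming the commonly cited s16e7 layout, which is my assumption, not vendor documentation):

```python
# Mantissa widths (fraction bits) for the fragment formats in play.
# The 16-bit figure for ATI's 24-bit float assumes the commonly cited
# s16e7 layout -- an assumption, not vendor documentation.
mantissa_bits = {
    "NV30 fp16": 10,
    "R300 fp24": 16,
    "NV30 fp32": 23,
}

# smallest representable step relative to 1.0 for each format: ATI's
# precision sits squarely between Nvidia's two floating point options
ulp = {name: 2.0 ** -bits for name, bits in mantissa_bits.items()}
```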

The current NV30 cards do have some other disadvantages: They take up two
slots, and when the cooling fan fires up they are VERY LOUD. I’m not usually
one to care about fan noise, but the NV30 does annoy me.

I am using an NV30 in my primary work system now, largely so I can test more
of the rendering paths on one system, and because I feel Nvidia still has
somewhat better driver quality (ATI continues to improve, though). For a
typical consumer, I don’t think the decision is at all clear cut at the moment.

For developers doing forward looking work, there is a different tradeoff –
the NV30 runs fragment programs much slower, but it has a huge maximum
instruction count. I have bumped into program limits on the R300 already.

As always, better cards are coming soon.


Doom has dropped support for vendor-specific vertex programs
(NV_vertex_program and EXT_vertex_shader), in favor of using
ARB_vertex_program for all rendering paths. This has been a pleasant thing to
do, and both ATI and Nvidia supported the move. The standardization process
for ARB_vertex_program was pretty drawn out and arduous, but in the end, it is
a just-plain-better API than either of the vendor specific ones that it
replaced. I fretted for a while over whether I should leave in support for
the older APIs for broader driver compatibility, but the final decision was
that we are going to require a modern driver for the game to run in the
advanced modes. Older drivers can still fall back to either the ARB or NV10 paths.

The newly-ratified ARB_vertex_buffer_object extension will probably let me do
the same thing for NV_vertex_array_range and ATI_vertex_array_object.

Reasonable arguments can be made for and against the OpenGL or Direct-X style
of API evolution. With vendor extensions, you get immediate access to new
functionality, but then there is often a period of squabbling about exact
feature support from different vendors before an industry standard settles
down. With central planning, you can have “phasing problems” between
hardware and software releases, and there is a real danger of bad decisions
hampering the entire industry, but enforced commonality does make life easier
for developers. Trying to keep boneheaded-ideas-that-will-haunt-us-for-years
out of Direct-X is the primary reason I have been attending the Windows
Graphics Summit for the past three years, even though I still code for OpenGL.

The most significant functionality in the new crop of cards is the truly
flexible fragment programming, as exposed with ARB_fragment_program. Moving
from the “switches and dials” style of discrete functional graphics
programming to generally flexible programming with indirection and high
precision is what is going to enable the next major step in graphics engines.

It is going to require fairly deep, non-backwards-compatible modifications to
an engine to take real advantage of the new features, but working with
ARB_fragment_program is really a lot of fun, so I have added a few little
tweaks to the current codebase on the ARB2 path:

High dynamic color ranges are supported internally, rather than with
post-blending. This gives a few more bits of color precision in the final
image, but it isn’t something that you really notice.

Per-pixel environment mapping, rather than per-vertex. This fixes a pet-peeve
of mine, which is large panes of environment mapped glass that aren’t
tessellated enough, giving that awful warping-around-the-triangulation effect
as you move past them.

Light and view vectors normalized with math, rather than a cube map. On
future hardware this will likely be a performance improvement due to the
decrease in bandwidth, but current hardware has the computation and bandwidth
balanced such that it is pretty much a wash. What it does (in conjunction
with floating point math) give you is a perfectly smooth specular highlight,
instead of the pixelish blob that we get on older generations of cards.
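The math in question is just a dot product, a reciprocal square root, and a multiply (DP3/RSQ/MUL in ARB_fragment_program terms), which is exact per pixel, where a normalization cube map returns a texel-quantized direction. A toy comparison, with a deliberately crude cube-map model (8-bit components and filtering ignored):

```python
import math

def normalize_math(v):
    # the fragment-program version: DP3, RSQ, MUL -- exact per pixel
    d = v[0] * v[0] + v[1] * v[1] + v[2] * v[2]
    inv = 1.0 / math.sqrt(d)
    return tuple(c * inv for c in v)

def normalize_cubemap(v, face_size=256):
    # crude model of a normalization cube map lookup: snap the direction
    # to the texel grid of one cube face (8-bit storage and bilinear
    # filtering are ignored for simplicity)
    m = max(abs(c) for c in v)
    half = face_size / 2.0
    snapped = tuple(round(c / m * half) / half for c in v)
    return normalize_math(snapped)

v = (0.3, 0.4, 1.0)
exact = normalize_math(v)
looked_up = normalize_cubemap(v)
error = max(abs(a - b) for a, b in zip(exact, looked_up))
```

The quantization error is small per pixel, but it is exactly the kind of stair-stepping that turns a smooth highlight into a pixelish blob.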

There are some more things I am playing around with, that will probably remain
in the engine as novelties, but not supported features:

Per-pixel reflection vector calculations for specular, instead of an
interpolated half-angle. The only remaining effect that has any visual
dependency on the underlying geometry is the shape of the specular highlight.
Ideally, you want the same final image for a surface regardless of whether it
is two giant triangles, or a mesh of 1024 triangles. This will not be true if
any calculation done at a vertex involves anything other than linear math
operations. The specular half-angle calculation involves normalizations, so
the interpolation across triangles on a surface will be dependent on exactly
where the vertexes are located. The most visible end result of this is that
on large, flat, shiny surfaces where you expect a clean highlight circle
moving across it, you wind up with a highlight that distorts into an L shape
around the triangulation line.
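The triangulation dependence is easy to reproduce numerically. In this toy setup (hypothetical vectors, flat surface seen from straight above), interpolating per-vertex half-angles across one big triangle nearly kills the highlight that per-pixel evaluation produces at the same point:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def lerp(a, b, t):
    return tuple((1 - t) * x + t * y for x, y in zip(a, b))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def half_angle(l, v):
    # normalize(L + V): the non-linear step that breaks interpolation
    return normalize(tuple(a + b for a, b in zip(l, v)))

N = (0.0, 0.0, 1.0)                  # flat, shiny surface
V = (0.0, 0.0, 1.0)                  # viewer straight above
L0 = normalize(( 1.0, 0.0, 0.2))     # light direction at vertex 0
L1 = normalize((-1.0, 0.0, 0.2))     # light direction at vertex 1
SPEC_POWER = 32

# per-vertex: compute H at each vertex, linearly interpolate to midpoint
h_interp = lerp(half_angle(L0, V), half_angle(L1, V), 0.5)
spec_vertex = max(0.0, dot(N, h_interp)) ** SPEC_POWER

# per-pixel: interpolate L, then do all the non-linear math at the pixel
h_pixel = half_angle(normalize(lerp(L0, L1, 0.5)), V)
spec_pixel = max(0.0, dot(N, h_pixel)) ** SPEC_POWER
```

The per-pixel result is a full-strength highlight at the midpoint, while the interpolated half-angle has shortened enough that the specular power almost zeroes it out; move a vertex and the answer changes again, which is the dependence described above.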

The extra instructions to implement this did have a noticeable performance
hit, and I was a little surprised to see that the highlights not only
stabilized in shape, but also sharpened up quite a bit, changing the scene
more than I expected. This probably isn’t a good tradeoff today for a gamer,
but it is nice for any kind of high-fidelity rendering.

Renormalization of surface normal map samples makes significant quality
improvements in magnified textures, turning tight, blurred corners into shiny,
smooth pockets, but it introduces a huge amount of aliasing on minimized
textures. Blending between the cases is possible with fragment programs, but
the performance overhead does start piling up, and it may require stashing
some information in the normal map alpha channel that varies with mip level.
Doing good filtering of a specularly lit normal map texture is a fairly
interesting problem, with lots of subtle issues.
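The core of the issue is that filtering normals component-wise shortens them. A minimal illustration (texel values made up for the example):

```python
import math

def length(v):
    return math.sqrt(sum(c * c for c in v))

# two adjacent normal-map texels around a tight corner (made-up values)
n0 = ( 0.7071, 0.0, 0.7071)
n1 = (-0.7071, 0.0, 0.7071)

# mip filtering averages component-wise, which shortens the vector and
# so softens the lighting response
filtered = tuple((a + b) / 2.0 for a, b in zip(n0, n1))

# renormalizing in the fragment program restores unit length, which
# sharpens magnified texels but re-introduces the high-frequency detail
# filtering was trying to remove -- hence the minification aliasing
renorm = tuple(c / length(filtered) for c in filtered)
```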

Bump mapped ambient lighting will give much better looking outdoor and
well-lit scenes. This only became possible with dependent texture reads, and
it requires new designer and tool-chain support to implement well, so it isn’t
easy to test globally with the current Doom datasets, but isolated demos are promising.

The future is in floating point framebuffers. One of the most noticeable
things this will get you without fundamental algorithm changes is the ability
to use a correct display gamma ramp without destroying the dark color
precision. Unfortunately, using a floating point framebuffer on the current
generation of cards is pretty difficult, because no blending operations are
supported, and the primary thing we need to do is add light contributions
together in the framebuffer. The workaround is to copy the part of the
framebuffer you are going to reference to a texture, and have your fragment
program explicitly add that texture, instead of having the separate blend unit
do it. This is intrusive enough that I probably won’t hack up the current
codebase, instead playing around on a forked version.
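The dark-precision point is easy to quantify: with an 8-bit framebuffer you effectively have to store gamma-encoded values, because storing linear light leaves very few codes for the darks. A quick count, assuming a display gamma of 2.2:

```python
GAMMA = 2.2  # assumed display gamma for this illustration

# count how many of the 256 framebuffer codes land in the darkest 5%
# of displayed light under each storage convention
linear_codes = sum(1 for k in range(256) if k / 255.0 <= 0.05)
gamma_codes = sum(1 for k in range(256) if (k / 255.0) ** GAMMA <= 0.05)

# storing linear light in 8 bits leaves only ~13 distinct dark shades,
# where gamma encoding leaves ~66 -- a float framebuffer lets you keep
# the lighting math linear without giving up the dark range
```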

Floating point framebuffers and complex fragment shaders will also allow much
better volumetric effects, like volumetric illumination of fogged areas with
shadows and additive/subtractive eddy currents.

John Carmack


More graphics card notes:

Filed under: — johnc @ 9:18 pm

I need to apologize to Matrox – their implementation of hardware displacement
mapping is NOT quad based. I was thinking about a certain other company’s
proposed approach. Matrox’s implementation actually looks quite good, so even
if we don’t use it because of the geometry amplification issues, I think it
will serve the noble purpose of killing dead any proposal to implement a quad
based solution.

I got a 3Dlabs P10 card in last week, and yesterday I put it through its
paces. Because my time is fairly over committed, first impressions often
determine how much work I devote to a given card. I didn’t speak to ATI for
months after they gave me a beta 8500 board last year with drivers that
rendered the console incorrectly. :-)

I was duly impressed when the P10 just popped right up with full functional
support for both the fallback ARB_ extension path (without specular
highlights), and the NV10 NVidia register combiners path. I only saw two
issues that were at all incorrect in any of our data, and one of them is
debatable. They don’t support NV_vertex_program_1_1, which I use for the NV20
path, and when I hacked my programs back to 1.0 support for testing, an
issue did show up, but still, this is the best showing from a new board from
any company other than Nvidia.

It is too early to tell what the performance is going to be like, because they
don’t yet support a vertex object extension, so the CPU is hand feeding all
the vertex data to the card at the moment. It was faster than I expected for
those circumstances.

Given the good first impression, I was willing to go ahead and write a new
back end that would let the card do the entire Doom interaction rendering in
a single pass. The most expedient sounding option was to just use the Nvidia
extensions that they implement, NV_vertex_program and NV_register_combiners,
with seven texture units instead of the four available on GF3/GF4. Instead, I
decided to try using the prototype OpenGL 2.0 extensions they provide.

The implementation went very smoothly, but I did run into the limits of their
current prototype compiler before the full feature set could be implemented.
I like it a lot. I am really looking forward to doing research work with this
programming model after the compiler matures a bit. While the shading
languages are the most critical aspects, and can be broken out as extensions
to current OpenGL, there are a lot of other subtle-but-important things that
are addressed in the full OpenGL 2.0 proposal.

I am now committed to supporting an OpenGL 2.0 renderer for Doom through all
the spec evolutions. If anything, I have been somewhat remiss in not pushing
the issues as hard as I could with all the vendors. Now really is the
critical time to start nailing things down, and the decisions may stay with
us for ten years.

A GL2 driver won’t give any theoretical advantage over the current back ends
optimized for cards with 7+ texture capability, but future research work will
almost certainly be moving away from the lower level coding practices, and if
some new vendor pops up (say, Rendition back from the dead) with a next-gen
card, I would strongly urge them to implement GL2 instead of proprietary

I have not done a detailed comparison with Cg. There are a half dozen C-like
graphics languages floating around, and honestly, I don’t think there is a
hell of a lot of usability difference between them at the syntax level. They
are all a whole lot better than the current interfaces we are using, so I hope
syntax quibbles don’t get too religious. It won’t be too long before all real
work is done in one of these, and developers that stick with the lower level
interfaces will be regarded like people that write all-assembly PC
applications today. (I get some amusement from the all-assembly crowd, and it
can be impressive, but it is certainly not effective)

I do need to get up on a soapbox for a long discourse about why the upcoming
high level languages MUST NOT have fixed, queried resource limits if they are
going to reach their full potential. I will go into a lot of detail when I
get a chance, but drivers must have the right and responsibility to multipass
arbitrarily complex inputs to hardware with smaller limits. Get over it.


The Matrox Parhelia Report

Filed under: — johnc @ 3:38 pm

The executive summary is that the Parhelia will run Doom, but it is not
performance competitive with Nvidia or ATI.

Driver issues remain, so it is not perfect yet, but I am confident that Matrox
will resolve them.

The performance was really disappointing for the first 256 bit DDR card. I
tried to set up a “poster child” case that would stress the memory subsystem
above and beyond any driver or triangle level inefficiencies, but I was
unable to get it to ever approach the performance of a GF4.

The basic hardware support is good, with fragment flexibility better than GF4
(but not as good as ATI 8500), but it just doesn’t keep up in raw performance.
With a die shrink, this chip could probably be a contender, but there are
probably going to be other chips out by then that will completely eclipse
this generation of products.

None of the special features will be really useful for Doom:

The 10 bit color framebuffer is nice, but Doom needs more than 2 bits of
destination alpha when a card only has four texture units, so we can’t use it.

Anti aliasing features are nice, but it isn’t all that fast in minimum feature
mode, so nobody is going to be turning on AA. The same goes for “surround
gaming”. While the framerate wouldn’t be 1/3 the base, it would still
probably be cut in half.

Displacement mapping. Sigh. I am disappointed that the industry is still
pursuing any quad based approaches. Haven’t we learned from the stellar
success of 3DO, Saturn, and NV1 that quads really suck? In any case, we can’t
use any geometry amplification scheme (including ATI’s truform) in conjunction
with stencil shadow volumes.


Shadow Volume

Filed under: — johnc @ 7:36 pm

Mark Kilgard and Cass Everitt at Nvidia have released a paper on shadow volume
rendering with several interesting bits in it. They also include a small
document that I wrote a couple years ago about my discovery process during
the development of some of the early Doom technology.

8:50 pm addendum: Mark Kilgard at Nvidia said that the current drivers already
support the vertex program option to be invariant with the fixed function path,
and that it turned out to be one instruction FASTER, not slower.


Nvidia vs. ATI

Filed under: — johnc @ 3:43 pm

Last month I wrote the Radeon 8500 support for Doom. The bottom line is that
it will be a fine card for the game, but the details are sort of interesting.

I had a pre-production board before Siggraph last year, and we were discussing
the possibility of letting ATI show a Doom demo behind closed doors on it. We
were all very busy at the time, but I took a shot at bringing up support over
a weekend. I hadn’t coded any of the support for the custom ATI extensions
yet, but I ran the game using only standard OpenGL calls (this is not a
supported path, because without bump mapping everything looks horrible) to see
how it would do. It didn’t even draw the console correctly, because they had
driver bugs with texGen. I thought the odds were very long against having all
the new, untested extensions working properly, so I pushed off working on it
until they had revved the drivers a few more times.

My judgment was colored by the experience of bringing up Doom on the original
Radeon card a year earlier, which involved chasing a lot of driver bugs. Note
that ATI was very responsive, working closely with me on it, and we were able
to get everything resolved, but I still had no expectation that things would
work correctly the first time.

Nvidia’s OpenGL drivers are my “gold standard”, and it has been quite a while
since I have had to report a problem to them, and even their brand new
extensions work as documented the first time I try them. When I have a
problem on an Nvidia, I assume that it is my fault. With anyone else’s
drivers, I assume it is their fault. This has turned out correct almost all
the time. I have heard more anecdotal reports of instability on some systems
with Nvidia drivers recently, but I track stability separately from
correctness, because it can be influenced by so many outside factors.

ATI had been patiently pestering me about support for a few months, so last
month I finally took another stab at it. The standard OpenGL path worked
flawlessly, so I set about taking advantage of all the 8500 specific features.
As expected, I did run into more driver bugs, but ATI got me fixes rapidly,
and we soon had everything working properly. It is interesting to contrast
the Nvidia and ATI functionality:

The vertex program extensions provide almost the same functionality. The ATI
hardware is a little bit more capable, but not in any way that I care about.
The ATI extension interface is massively more painful to use than the text
parsing interface from nvidia. On the plus side, the ATI vertex programs are
invariant with the normal OpenGL vertex processing, which allowed me to reuse
a bunch of code. The Nvidia vertex programs can’t be used in multipass
algorithms with standard OpenGL passes, because they generate tiny differences
in depth values, forcing you to implement EVERYTHING with vertex programs.
Nvidia is planning on making this optional in the future, at a slight speed cost.

I have mixed feelings about the vertex object / vertex array range extensions.
ATI’s extension seems more “right” in that it automatically handles
synchronization by default, and could be implemented as a wire protocol, but
there are advantages to the VAR extension being simply a hint. It is easy to
have a VAR program just fall back to normal virtual memory by not setting the
hint and using malloc, but ATI’s extension requires different function calls
for using vertex objects and normal vertex arrays.

The fragment level processing is clearly way better on the 8500 than on the
Nvidia products, including the latest GF4. You have six individual textures,
but you can access the textures twice, giving up to eleven possible texture
accesses in a single pass, and the dependent texture operation is much more
sensible. This wound up being a perfect fit for Doom, because the standard
path could be implemented with six unique textures, but required one texture
(a normalization cube map) to be accessed twice. The vast majority of Doom
light / surface interaction rendering will be a single pass on the 8500, in
contrast to two or three passes, depending on the number of color components
in a light, for GF3/GF4 (*note GF4 bitching later on).

Initial performance testing was interesting. I set up three extreme cases to
exercise different characteristics:

A test of the non-textured stencil shadow speed showed a GF3 about 20% faster
than the 8500. I believe that Nvidia has a slightly higher performance memory architecture.

A test of light interaction speed initially had the 8500 significantly slower
than the GF3, which was shocking due to the difference in pass count. ATI
identified some driver issues, and the speed came around so that the 8500 was
faster in all combinations of texture attributes, in some cases 30+% more.
This was about what I expected, given the large savings in memory traffic by
doing everything in a single pass.

A high polygon count scene that was more representative of real game graphics
under heavy load gave a surprising result. I was expecting ATI to clobber
Nvidia here due to the much lower triangle count and MUCH lower state change
functional overhead from the single pass interaction rendering, but they came
out slower. ATI has identified an issue that is likely causing the unexpected
performance, but it may not be something that can be worked around on current hardware.

I can set up scenes and parameters where either card can win, but I think that
current Nvidia cards are still a somewhat safer bet for consistent performance
and quality.

On the topic of current Nvidia cards:

Do not buy a GeForce4-MX for Doom.

Nvidia has really made a mess of the naming conventions here. I always
thought it was bad enough that GF2 was just a speed bumped GF1, while GF3 had
significant architectural improvements over GF2. I expected GF4 to be the
speed bumped GF3, but calling the NV17 GF4-MX really sucks.

GF4-MX will still run Doom properly, but it will be using the NV10 codepath
with only two texture units and no vertex shaders. A GF3 or 8500 will be
much better performers. The GF4-MX may still be the card of choice for many
people depending on pricing, especially considering that many games won’t use
four textures and vertex programs, but damn, I wish they had named it
something else.

As usual, there will be better cards available from both Nvidia and ATI by the
time we ship the game.



Quake 2 Source Code

Filed under: — johnc @ 7:07 pm

The Quake 2 source code is now available for download, licensed under the GPL.

As with previous source code releases, the game data remains under the
original copyright and license, and cannot be freely distributed. If you
create a true total conversion, you can give (or sell) a complete package
away, as long as you abide by the GPL source code license. If your projects
use the original Quake 2 media, the media must come from a normal, purchased
copy of the game.

I’m sure I will catch some flack about increased cheating after the source
release, but there are plenty of Q2 cheats already out there, so you are
already in the position of having to trust the other players to a degree. The
problem is really only solvable by relying on the community to police itself,
because it is a fundamentally unwinnable technical battle to make a completely
cheat proof game of this type. Play with your friends.


Driver Optimization

Filed under: — johnc @ 11:22 pm

Driver optimizations have been discussed a lot lately because of the quake3
name checking in ATI’s recent drivers, so I am going to lay out my
position on the subject.

There are many driver optimizations that are pure improvements in all cases,
with no negative effects. The difficult decisions come up when it comes to
“trades” of various kinds, where a change will give an increase in
performance, but at a cost.

Relative performance trades. Part of being a driver writer is being able to
say “I don’t care if stippled, anti-aliased points with texturing go slow”,
and optimizing accordingly. Some hardware features, like caches and
hierarchical buffers, may be advantages on some apps, and disadvantages on
others. Command buffer sizes often tune differently for different applications.

Quality trades. There is a small amount of wiggle room in the specs for pixel
level variability, and some performance gains can be had by leaning towards
the minimums. Most quality trades would actually be conformance trades,
because the results are not exactly conformant, but they still do “roughly”
the right thing from a visual standpoint. Compressing textures automatically,
avoiding blending of very faint transparent pixels, using a 16 bit depth
buffer, etc. A good application will allow the user to make most of these
choices directly, but there is good call for having driver preference panels
to enable these types of changes on naive applications. Many drivers now
allow you to quality trade in an opposite manner – slowing application
performance by turning on anti-aliasing or anisotropic texture filtering.

Conformance trades. Most conformance trades that happen with drivers are
unintentional, where the slower, more general fallback case just didn’t get
called when it was supposed to, because the driver didn’t check for a certain
combination to exit some specially optimized path. However, there are
optimizations that can give performance improvements in ways that make it
impossible to remain conformant. For example, a driver could choose to skip
storing of a color value before it is passed on to the hardware, which would
save a few cycles, but make it impossible to correctly answer
glGetFloatv( GL_CURRENT_COLOR, buffer ).
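As a toy model of that last trade (purely illustrative, nothing like a real driver's structure): skipping the state store saves a little work per call, but leaves the query unanswerable:

```python
# Purely illustrative mock of the conformance trade described above --
# not how any real driver is structured.
class ConformantDriver:
    def __init__(self):
        self.current_color = (1.0, 1.0, 1.0, 1.0)  # OpenGL's default
        self.hardware_stream = []

    def glColor4f(self, r, g, b, a):
        self.current_color = (r, g, b, a)          # the "extra" store
        self.hardware_stream.append(("color", r, g, b, a))

    def get_current_color(self):                   # glGetFloatv analogue
        return self.current_color

class OptimizedDriver(ConformantDriver):
    def glColor4f(self, r, g, b, a):
        # skip the store: a few cycles saved, conformance lost
        self.hardware_stream.append(("color", r, g, b, a))

good, fast = ConformantDriver(), OptimizedDriver()
good.glColor4f(0.2, 0.4, 0.6, 1.0)
fast.glColor4f(0.2, 0.4, 0.6, 1.0)
# good.get_current_color() reflects the new color;
# fast.get_current_color() still returns the stale default
```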

Normally, driver writers will just pick their priorities and make the trades,
but sometimes there will be a desire to make different trades in different
circumstances, so as to get the best of both worlds.

Explicit application hints are a nice way to offer different performance
characteristics, but that requires cooperation from the application, so it
doesn’t help in an ongoing benchmark battle. OpenGL’s glHint() call is the
right thought, but not really set up as flexibly as you would like. Explicit
extensions are probably the right way to expose performance trades, but it
isn’t clear to me that any conformant trade will be a big enough difference
to add code for.

End-user selectable optimizations. Put a selection option in the driver
properties window to allow the user to choose which application class they
would like to be favored in some way. This has been done many times, and is a
reasonable way to do things. Most users would never touch the setting, so
some applications may be slightly faster or slower than in their “optimal
benchmark mode”.

Attempt to guess the application from app names, window strings, etc. Drivers
are sometimes forced to do this to work around bugs in established software,
and occasionally they will try to use this as a cue for certain optimizations.

My positions:

Making any automatic optimization based on a benchmark name is wrong. It
subverts the purpose of benchmarking, which is to gauge how a similar class of
applications will perform on a tested configuration, not just how the single
application chosen as representative performs.

It is never acceptable to have the driver automatically make a conformance
tradeoff, even if they are positive that it won’t make any difference. The
reason is that applications evolve, and there is no guarantee that a future
release won’t have different assumptions, causing the upgrade to misbehave.
We have seen this in practice with Quake3 and derivatives, where vendors
assumed something about what may or may not be enabled during a compiled
vertex array call. Most of these are just mistakes, or, occasionally,

Allowing a driver to present a non-conformant option for the user to select is
an interesting question. I know that as a developer, I would get hate mail
from users when a point release breaks on their whiz-bang optimized driver,
just like I do with overclocked CPUs, and I would get the same “but it works
with everything else!” response when I tell them to put it back to normal. On
the other hand, being able to tweak around with that sort of thing is fun for
technically inclined users. I lean towards frowning on it, because it is a
slippery slope from there down into “cheating drivers” of the see-through-
walls variety.

Quality trades are here to stay, with anti-aliasing, anisotropic texture
filtering, and other options being positive trades that a user can make, and
allowing various texture memory optimizations can be a very nice thing for a
user trying to get some games to work well. However, it is still important
that it start from a completely conformant state by default. This is one area
where application naming can be used reasonably by the driver, to maintain
user selected per-application modifiers.

I’m not fanatical on any of this, because the overriding purpose of software
is to be useful, rather than correct, but the days of game-specific mini-
drivers that can just barely cut it are past, and we should demand more from
the remaining vendors.

Also, excessive optimization is the cause of quite a bit of ill user
experience with computers. Byzantine code paths extract costs as long as they
exist, not just as they are written.


Doom 3 on a GeForce 3

Filed under: — johnc @ 9:02 pm

I just got back from Tokyo, where I demonstrated our new engine
running under MacOS-X with a GeForce 3 card. We had quite a bit of
discussion about whether we should be showing anything at all,
considering how far away we are from having a title on the shelves, so
we probably aren’t going to be showing it anywhere else for quite
a while.

We do run a bit better on a high end wintel system, but the Apple
performance is still quite good, especially considering the short amount
of time that the drivers had before the event.

It is still our intention to have a simultaneous release of the next
product on Windows, MacOS-X, and Linux.

Here is a dump on the GeForce 3 that I have been seriously working
with for a few weeks now:

The short answer is that the GeForce 3 is fantastic. I haven’t had such an
impression of raising the performance bar since the Voodoo 2 came out, and
there are a ton of new features for programmers to play with.

Graphics programmers should run out and get one at the earliest possible
time. For consumers, it will be a tougher call. There aren’t any
applications out right now that take proper advantage of it, but you should
still be quite a bit faster at everything than GF2, especially with
anti-aliasing. Balance that against whatever the price turns out to be.

While the Radeon is a good effort in many ways, it has enough shortfalls
that I still generally call the GeForce 2 ultra the best card you can buy
right now, so Nvidia is basically dethroning their own product.

It is somewhat unfortunate that it is labeled GeForce 3, because GeForce
2 was just a speed bump of GeForce, while GF3 is a major architectural
change. I wish they had called the GF2 something else.

The things that are good about it:

Lots of values have additional internal precision, like texture coordinates
and rasterization coordinates. There are only a few places where this
matters, but it is nice to see them cleaned up. Rasterization precision is about
the last thing that the multi-thousand dollar workstation boards still do
any better than the consumer cards.

Adding more texture units and more register combiners is an obvious
evolutionary step.

An interesting technical aside: when I first changed something I was
doing with five single or dual texture passes on a GF to something that
only took two quad texture passes on a GF3, I got a surprisingly modest
speedup. It turned out that the texture filtering and bandwidth was the
dominant factor, not the frame buffer traffic that was saved with more
texture units. When I turned off anisotropic filtering and used
compressed textures, the GF3 version became twice as fast.

The 8x anisotropic filtering looks really nice, but it has a 30%+ speed
cost. For existing games where you have speed to burn, it is probably a
nice thing to force on, but it is a bit much for me to enable on the current
project. Radeon supports 16x aniso at a smaller speed cost, but not in
conjunction with trilinear, and something is broken in the chip that
makes the filtering jump around with triangular rasterization.

The depth buffer optimizations are similar to what the Radeon provides,
giving almost everything some measure of speedup, and larger ones
available in some cases with some redesign.

3D textures are implemented with full generality. Radeon
offers 3D textures, but without mip mapping and in a non-orthogonal
manner (taking up two texture units).

Vertex programs are probably the most radical new feature, and, unlike
most “radical new features”, actually turn out to be pretty damn good.
The instruction language is clear and obvious, with wonderful features
like free arbitrary swizzle and negate on each operand, and the obvious
things you want for graphics like dot product instructions.

The vertex program instructions are what SSE should have been.

A complex setup for a four-texture rendering pass is way easier to
understand with a vertex program than with a ton of texgen/texture
matrix calls, and it lets you do things that you just couldn’t do hardware
accelerated at all before. Changing the model from fixed function data
like normals, colors, and texcoords to generalized attributes is very
important for future progress.

Here, I think Microsoft and DX8 are providing a very good benefit by
forcing a single vertex program interface down all the hardware
vendor’s throats.

This one is truly stunning: the drivers just worked for all the new
features that I tried. I have tested a lot of pre-production 3D cards, and it
has never been this smooth.

The things that are indifferent:

I’m still not a big believer in hardware accelerated curve tessellation.
I’m not going to go over all the reasons again, but I would have rather
seen the features left off and ended up with a cheaper part.

The shadow map support is good to get in, but I am still unconvinced
that a fully general engine can be produced with acceptable quality using
shadow maps for point lights. I spent a while working with shadow
buffers last year, and I couldn’t get satisfactory results. I will revisit
that work now that I have GeForce 3 cards, and directly compare it with my
current approach.

At high triangle rates, the index bandwidth can get to be a significant
thing. Other cards that allow static index buffers as well as static vertex
buffers will have situations where they provide higher application speed.
Still, we do get great throughput on the GF3 using vertex array range
and glDrawElements.

The things that are bad about it:

Vertex programs aren’t invariant with the fixed function geometry paths.
That means that you can’t mix vertex program passes with normal
passes in a multipass algorithm. This is annoying, and shouldn’t have
happened.

Now we come to the pixel shaders, where I have the most serious issues.
I can just ignore this most of the time, but the way the pixel shader
functionality turned out is painfully limited, and not what it should have
been.

DX8 tries to pretend that pixel shaders live on hardware that is a lot
more general than the reality.

Nvidia’s OpenGL extensions expose things much more the way they
actually are: the existing register combiners functionality extended to
eight stages with a couple tweaks, and the texture lookup engine is
configurable to interact between textures in a list of specific ways.

I’m sure it started out as a better design, but it apparently got cut and cut
until it really looks like the old BumpEnvMap feature writ large: it does
a few specific special effects that were deemed important, at the expense
of a properly general solution.

Yes, it does full bumpy cubic environment mapping, but you still can’t
just do some math ops and look the result up in a texture. I was
disappointed on this count with the Radeon as well, which was just
slightly too hardwired to the DX BumpEnvMap capabilities to allow
more general dependent texture use.

Enshrining the capabilities of this mess in DX8 sucks. Other companies
had potentially better approaches, but they are now forced to dumb them
down to the level of the GF3 for the sake of compatibility. Hopefully
we can still see some of the extra flexibility in OpenGL extensions.

The future:

I think things are going to really clean up in the next couple years. All
of my advocacy is focused on making sure that there will be a
completely clean and flexible interface for me to target in the engine
after DOOM, and I think it is going to happen.

The market may have shrunk to just ATI and Nvidia as significant
players. Matrox, 3dlabs, or one of the dormant companies may surprise
us all, but the pace is pretty frantic.

I think I would be a little more comfortable if there was a third major
player competing, but I can’t fault Nvidia’s path to success.


The Birth of Doom 3

Filed under: — johnc @ 2:51 am

Well, this is going to be an interesting .plan update.

Most of this is not really public business, but if some things aren’t stated
explicitly, it will reflect unfairly on someone.

As many people have heard discussed, there was quite a desire to remake DOOM
as our next project after Q3. Discussing it brought an almost palpable thrill
to most of the employees, but Adrian had a strong enough dislike for the idea
that it was shot down over and over again.

Design work on an alternate game has been going on in parallel with the
mission pack development and my research work.

Several factors, including a general lack of enthusiasm for the proposed plan,
the warmth that Wolfenstein was met with at E3, and excitement about what
we can do with the latest rendering technology were making it seem more and
more like we weren’t going down the right path.

I discussed it with some of the other guys, and we decided that it was
important enough to drag the company through an unpleasant fight over it.

An ultimatum was issued to Kevin and Adrian (who control >50% of the company):
We are working on DOOM for the next project unless you fire us.

Obviously no fun for anyone involved, but the project direction was changed,
new hires have been expedited, and the design work has begun.

It wasn’t planned to announce this soon, but here it is: We are working on a
new DOOM game, focusing on the single player game experience, and using brand
new technology in almost every aspect of it. That is all we are prepared to
say about the game for quite some time, so don’t push for interviews. We
will talk about it when things are actually built, to avoid giving
misleading comments.

It went smoother than expected, but the other shoe dropped yesterday.

Kevin and Adrian fired Paul Steed in retaliation, over my opposition.

Paul has certainly done things in the past that could be grounds for
dismissal, but this was retaliatory for him being among the “conspirators”.

I happen to think Paul was damn good at his job, and that he was going to be
one of the most valuable contributors to DOOM.

We need to hire two new modeler/animator/cinematic director types. If you
have a significant commercial track record in all three areas, and consider
yourself at the top of your field, send your resume to Kevin Cloud.


Comments on the Latest Cards

Filed under: — johnc @ 4:10 pm

I have gotten a lot of requests for comments on the latest crop of video
cards, so here is my initial technical evaluation. We have played with
some early versions, but this is a paper evaluation. I am not in a position
to judge 2D GDI issues or TV/DVD issues, so this is just 3D commentary.

Marketing silliness: saying “seven operations on a pixel” for a dual texture
chip. Yes, I like NV_register_combiners a lot, but come on…

The DDR GeForce is the reigning champ of 3D cards. Of the shipping boards, it
is basically better than everyone at every aspect of 3D graphics, and
pioneered some features that are going to be very important: signed pixel
math, dot product blending, and cubic environment maps.

The GeForce2 is just a speed bumped GeForce with a few tweaks, but that’s not
a bad thing. Nvidia will have far and away the tightest drivers for quite
some time, and that often means more than a lot of new features in the real
world.

The nvidia register combiners are highly programmable, and can often save a
rendering pass or allow a somewhat higher quality calculation, but on the
whole, I would take ATI’s third texture for flexibility.

Nvidia will probably continue to hit the best framerates in benchmarks at low
resolution, because they have flexible hardware with geometry acceleration
and well-tuned drivers.

GeForce is my baseline for current rendering work, so I can wholeheartedly
recommend it.

Marketing silliness: “charisma engine” and “pixel tapestry” are silly names
for vertex and pixel processing that are straightforward improvements over
existing methods. Sony is probably to blame for starting that.

The Radeon has the best feature set available, with several advantages over
GeForce:

A third texture unit per pixel
Three dimensional textures
Dependent texture reads (bump env map)
Greater internal color precision.
User clip planes orthogonal to all rasterization modes.
More powerful vertex blending operations.

The shadow id map support may be useful, but my work with shadow buffers has
shown them to have significant limitations for global use in a game.

On paper, it is better than GeForce in almost every way except that it is
limited to a maximum of two pixels per clock while GeForce can do four. This
comes into play when the pixels don’t do as much memory access, for example
when just drawing shadow planes to the depth/stencil buffer, or when drawing
in roughly front to back order and many of the later pixels depth fail,
avoiding the color buffer writes.

Depending on the application and algorithm, this can be anywhere from
basically no benefit when doing 32 bit blended multi-pass, dual texture
rendering to nearly double the performance for 16 bit rendering with
compressed textures. In any case, a similarly clocked GeForce(2) should
somewhat outperform a Radeon on today’s games when fill rate limited. Future
games that do a significant number of rendering passes on the entire world
may go back in ATI’s favor if they can use the third texture unit, but I doubt
it will be all that common.

The real issue is how quickly ATI can deliver fully clocked production boards,
bring up stable drivers, and wring all the performance out of the hardware.
This is a very different beast than the Rage128. I would definitely recommend
waiting on some consumer reviews to check for teething problems before
upgrading to a Radeon, but if things go well, ATI may give nvidia a serious
run for their money this year.

Marketing silliness: Implying that a voodoo 5 is of a different class than a
voodoo 4 isn’t right. Voodoo 4 max / ultra / SLI / dual / quad or something
would have been more forthright.

Rasterization feature wise, voodoo4 is just catching up to the original TNT.
We finally have 32 bit color and stencil. Yeah.

There aren’t any geometry features.

The T buffer is really nothing more than an accumulation buffer that is
averaged together during video scanout. This same combining of separate
buffers can be done by any modern graphics card if they are set up for it
(although they will lose two bits of color precision in the process). At
around 60 fps there is a slight performance win by doing it at video scanout
time, but at 30 fps it is actually less memory traffic to do it explicitly.
Video scan tricks also usually don’t work in windowed modes.

The real unique feature of the voodoo5 is subpixel jittering during
rasterization, which can’t reasonably be emulated by other hardware. This
does indeed improve the quality of anti-aliasing, although I think 3dfx might
be pushing it a bit by saying their 4 sample jittering is as good as 16
sample unjittered.

The saving grace of the voodoo5 is the scalability. Because it only uses SDR
ram, a dual chip Voodoo5 isn’t all that much faster than some other single
chip cards, but the quad chip card has over twice the pixel fill rate of the
nearest competitor. That is a huge increment. Voodoo5 6000 should win every
benchmark that becomes fill rate limited.

I haven’t been able to honestly recommend a voodoo3 to people for a long
time, unless they had a favorite glide game or wanted early linux Xfree 4.0
3D support. Now (well, soon), a Voodoo5 6000 should make all of today’s
games look better than any other card. You can get over twice as many pixel
samples, and have them jittered and blended together for anti-aliasing.

It won’t be able to hit Q3 frame rates as high as GeForce, but if you have a
high end processor there really may not be all that much difference for you
between 100fps and 80fps unless you are playing hardcore competitive and
can’t stand the occasional drop below 60fps.

There are two drawbacks: it’s expensive, and it won’t take advantage of the
new rasterization features coming in future games. It probably wouldn’t be
wise to buy a voodoo5 if you plan on keeping it for two years.


Mojave Desert

Filed under: — johnc @ 2:13 am

I stayed a couple days after E3 to attend the SORAC amateur rocket launch.
I have provided some sponsorship to two of the teams competing for the CATS
(Cheap Access to Space) rocketry prize, and it was a nice opportunity to get
out and meet some of the people.

It is interesting how similar the activity is around an experimental rocket
launch, a race track day with an experimental car, and a beta release of new
software. Lots of “twenty more minutes!”, and lots
of well-wishers waiting around while the people on the critical path sweat
over what they are doing.

Mere minutes before we absolutely, positively needed to leave to catch our
plane flight, they started the countdown. The rocket launched impressively,
but broke apart at a relatively low altitude. Ouch. It was a hybrid, so
there wasn’t really an explosion, but watching the debris rain down wasn’t
very heartening. Times like that, I definitely appreciate working in
software. “Run it again, with a breakpoint!”

Note to self: pasty-skinned programmers ought not stand out in the Mojave
desert for multiple hours.


Quake 1 Utilities

Filed under: — johnc @ 1:02 pm

And the Q1 utilities are now also available under the GPL in


QC Files

Filed under: — johnc @ 7:41 pm

The .qc files for quake1/quakeworld are now available under the GPL
in source/qw-qc.tar.gz on our ftp site. This was an oversight on my
part in the original release.

Thanks to the QuakeForge team for doing the grunt work of the preparation.


More Bits per Color

Filed under: — johnc @ 12:23 am

We need more bits per color component in our 3D accelerators.

I have been pushing for a couple more bits of range for several years now,
but I now extend that to wanting full 16 bit floating point colors throughout
the graphics pipeline. A sign bit, ten bits of mantissa, and five bits of
exponent (possibly trading a bit or two between the mantissa and exponent).
Even that isn’t all you could want, but it is the rational step.

It is turning out that I need a destination alpha channel for a lot of the
new rendering algorithms, so intermediate solutions like 10/12/10 RGB
formats aren’t a good idea. Higher internal precision with dithering to 32
bit pixels would have some benefit, but dithered intermediate results can
easily start piling up the errors when passed over many times, as we have
seen with 5/6/5 rendering.

Eight bits of precision isn’t enough even for full range static image
display. Images with a wide range usually come out fine, but restricted
range images can easily show banding on a 24-bit display. Digital television
specifies 10 bits of precision, and many printing operations are performed
with 12 bits of precision.

The situation becomes much worse when you consider the losses after multiple
operations. As a trivial case, consider having multiple lights on a wall,
with their contribution to a pixel determined by a texture lookup. A single
light will fall off towards 0 some distance away, and if it covers a large
area, it will have visible bands as the light adds one unit, two units, etc.
Each additional light from the same relative distance stacks its contribution
on top of the earlier ones, which magnifies the amount of the step between
bands: instead of going 0,1,2, it goes 0,2,4, etc. Pile a few lights up like
this and look towards the dimmer area of the falloff, and you can believe you
are back in 256-color land.
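
The band-stacking arithmetic can be shown with a toy model; the functions and
intensity values here are purely illustrative, not from any real renderer:

```c
/* Toy model of the banding argument above: each light's falloff is
 * quantized to 8-bit steps on store, so summing identical lights
 * multiplies the step between adjacent visible bands. */
int quantized(float intensity)               /* one light, 8-bit store */
{
    return (int)(intensity * 255.0f);        /* truncate into a band */
}

int stacked(float intensity, int num_lights) /* equal lights summed */
{
    int total = 0;
    for (int i = 0; i < num_lights; i++)
        total += quantized(intensity);       /* each adds whole bands */
    return total;
}
```

With two lights, adjacent bands in the falloff land at 0, 2, 4 instead of
0, 1, 2: exactly the magnified stepping described above.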

There are other more subtle issues, like the loss of potential result values
from repeated squarings of input values, and clamping issues when you sum up
multiple incident lights before modulating down by a material.

Range is even more clear cut. There are some values that have intrinsic
ranges of 0.0 to 1.0, like factors of reflection and filtering. Normalized
vectors have a range of -1.0 to 1.0. However, the most central quantity in
rendering, light, is completely unbounded. We want a LOT more than a 0.0 to
1.0 range. Q3 hacks the gamma tables to sacrifice a bit of precision to get
a 0.0 to 2.0 range, but I wanted more than that for even primitive rendering
techniques. To accurately model the full human sensable range of light
values, you would need more than even a five bit exponent.

This wasn’t much of an issue even a year ago, when we were happy to just
cover the screen a couple times at a high framerate, but realtime graphics
is moving away from just “putting up wallpaper” to calculating complex
illumination equations at each pixel. It is not at all unreasonable to
consider having twenty textures contribute to the final value of a pixel.
Range and precision matter.

A few common responses to this pitch:

“64 bits per pixel??? Are you crazy???” Remember, it is exactly the same
relative step as we made from 16 bit to 32 bit, which didn’t take all
that long.

Yes, it will be slower. That’s ok. This is an important point: we can’t
continue to usefully use vastly greater fill rate without an increase in
precision. You can always crank the resolution and multisample anti-aliasing
up higher, but that starts to have diminishing returns well before you use up
the couple gigatexels of fill rate we are expected to have next year. The
cool and interesting things to do with all that fill rate involve many
passes composited into fewer pixels, making precision important.

“Can we just put it in the texture combiners and leave the framebuffer at 32
bits?” No. There are always going to be shade trees that overflow a given
number of texture units, and they are going to be the ones that need the
extra precision. Scales and biases between the framebuffer and the higher
precision internal calculations can get you some mileage (assuming you can
bring the blend color into your combiners, which current cards can’t), but
its still not what you want. There are also passes which fundamentally
aren’t part of a single surface, but still combine to the same pixels, as
with all forms of translucency, and many atmospheric effects.

“Do we need it in textures as well?” Not for most image textures, but it
still needs to be supported for textures that are used as function look
up tables.

“Do we need it in the front buffer?” Probably not. Going to a 64 bit front
buffer would probably play hell with all sorts of other parts of the system.
It is probably reasonable to stay with 32 bit front buffers with a blit from
the 64 bit back buffer performing a lookup or scale and bias operation before
dithering down to 32 bit. Dynamic light adaptation can also be done during
this copy. Dithering can work quite well as long as you are only performing
a single pass.

I used to be pitching this in an abstract “you probably should be doing this”
form, but two significant things have happened that have moved this up my hit
list to something that I am fairly positive about.

Mark Peercy of SGI has shown, quite surprisingly, that all Renderman surface
shaders can be decomposed into multi-pass graphics operations if two
extensions are provided over basic OpenGL: the existing pixel texture
extension, which allows dependent texture lookups (matrox already supports a
form of this, and most vendors will over the next year), and signed, floating
point colors through the graphics pipeline. It also makes heavy use of the
existing, but rarely optimized, copyTexSubImage2D functionality for
feedback.

This is a truly striking result. In retrospect, it seems obvious that with
adds, multiplies, table lookups, and stencil tests you can perform any
computation, but most people were working under the assumption that there
were fundamentally different limitations for “realtime” renderers vs offline
renderers. It may take hundreds or thousands of passes, but it clearly
defines an approach with no fundamental limits. This is very important.
I am looking forward to his Siggraph paper this year.

Once I sat down and started writing new renderers targeted at GeForce level
performance, the precision issue started to bite me personally. There
are quite a few times where I have gotten visible banding after a set of
passes, or have had to worry about ordering operations to avoid clamping.
There is nothing like actually dealing with problems that were mostly
theoretical before…

64 bit pixels. It is The Right Thing to do. Hardware vendors: don’t you be
the company that is the last to make the transition.


A Trip Down The Graphics Pipeline

Filed under: — johnc @ 6:43 am

Whenever I start a new graphics engine, I always spend a fair amount of time
flipping back through older graphics books. It is always interesting to see
how your changed perspective with new experience impacts your appreciation of
a given article.

I was skimming through Jim Blinn’s “A Trip Down The Graphics Pipeline”
tonight, and I wound up laughing out loud twice.

From the book:

P73: I then empirically found that I had to scale by -1 in x instead of in z,
and also to scale the xa and xf values by -1. (Basically I just put in enough
minus signs after the fact to make it work.) Al Barr refers to this technique
as “making sure you have made an even number of sign errors.”

P131: The only lines that generate w=0 after clipping are those that pass
through the z axis, the valley of the trough. These lines are lines that
pass exactly through the eyepoint. After which you are dead and don’t care
about divide-by-zero errors.

If you laughed, you are a graphics geek.

My first recollection of a Jim Blinn article many years ago was my skimming
over it and thinking “My god, what ridiculously picky minutia.” Over the last
couple years, I found myself haranguing people over some fairly picky issues,
like the LSB errors with cpu vs rasterizer face culling and screen edge
clipping with guard band bit tests. After one of those pitches, I quite
distinctly thought to myself “My god, I’m turning into Jim Blinn!” :-)


Starlight Foundation

Filed under: — johnc @ 3:39 am

Two years ago, Id was contacted by the Starlight Foundation, an organization
that tries to grant wishes to seriously ill kids.

There was a young man with Hodgkin’s Lymphoma who, instead of wanting to go
to Disneyland or other traditional wishes, wanted to visit Id and talk with
me about programming.

It turned out that Seumas McNally was already an accomplished developer.
His family company, Longbow Digital Arts, had
been doing quite respectably selling small games directly over the internet.
It bore a strong resemblance to the early shareware days of Apogee and Id.

We spent the evening talking about graphics programmer things – the relative
merits of voxels and triangles, procedurally generated media, level of detail
management, API and platforms.

We talked at length about the balance between technology and design, and all
the pitfalls that lie in the way of shipping a modern product.

We also took a dash out in my ferrari, thinking “this is going to be the best
excuse a cop will ever hear if we get pulled over”.

Longbow continued to be successful, and eventually the entire family was
working full time on “Treadmarks”, their new 3D tank game.

Over email about finishing the technology in Treadmarks, Seumas once said
“I hope I can make it”. Not “be a huge success” or “beat the competition”.
Just “make it”.

That is a yardstick to measure oneself by.

It is all too easy to lose your focus or give up with just the ordinary
distractions and disappointments that life brings. This wasn’t ordinary.
Seumas had cancer. Whatever problems you may be dealing with in your life,
they pale before having problems drawing your next breath.

He made it.

Treadmarks started shipping a couple months ago, and was entered in the
Independent Games Festival at the Game Developer’s Conference this last month.
It came away with the awards for technical excellence, game design, and the
grand prize.

I went out to dinner with the McNally family the next day, and had the
opportunity to introduce Anna to them. One of the projects at Anna’s new
company, Fountainhead Entertainment, is a
documentary covering gaming, and she had been looking forward to meeting
Seumas after hearing me tell his story a few times. The McNallys invited
her to bring a film crew up to Canada and talk with everyone whenever she
wanted.

Seumas died the next week.

I am proud to have been considered an influence in Seumas’ work, and I think
his story should be a good example for others. Through talent and
determination, he took something he loved and made a success out of it in
many dimensions.


Virtualized Video Card Local Memory is The Right Thing

Filed under: — johnc @ 11:47 pm

This is something I have been preaching for a couple years, but I
finally got around to setting all the issues down in writing.

First, the statement:

Virtualized video card local memory is The Right Thing.

Now, the argument (and a whole bunch of tertiary information):

If you had all the texture density in the world, how much texture
memory would be needed on each frame?

For directly viewed textures, mip mapping keeps the amount of
referenced texels between one and one quarter of the drawn pixels.
When anisotropic viewing angles and upper level clamping are taken into
account, the number gets smaller. Take 1/3 as a conservative estimate.

Given a fairly aggressive six texture passes over the entire screen,
that equates to needing twice as many texels as pixels. At 1024x768
resolution, well under two million texels will be referenced, no matter
what the finest level of detail is. This is the worst case, assuming
completely unique texturing with no repeating. More commonly, less
than one million texels are actually needed.
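
The worst-case estimate works out directly; a trivial check using the figures
from the text (six passes, the conservative 1/3 mip ratio, 1024x768):

```c
/* Worst-case referenced-texel count from the estimate above:
 * pixels * passes * (1/3), the conservative mip-map ratio. */
long texels_referenced(long width, long height, int passes)
{
    return width * height * passes / 3;
}
```

At 1024x768 with six passes that is 1,572,864 texels, comfortably under the
two million figure.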

As anyone who has tried to run certain Quake 3 levels in high quality
texture mode on an eight or sixteen meg card knows, it doesn’t work out
that way in practice. There is a fixable part and some more
fundamental parts to the fall-over-dead-with-too-many-textures problem.

The fixable part is that almost all drivers perform pure LRU (least
recently used) memory management. This works correctly as long as the
total amount of textures needed for a given frame fits in the card’s
memory after they have been loaded. As soon as you need a tiny bit
more memory than fits on the card, you fall off of a performance cliff.
If you need 14 megs of textures to render a frame, and your graphics
card has 12 megs available after its frame buffers, you wind up loading
14 megs of texture data over the bus every frame, instead of just the 2
megs that don’t fit. Having the cpu generate 14 megs of command
traffic can drop you way into the single digit frame rates on most
drivers.

If an application makes reasonable effort to group rendering by
texture, and there is some degree of coherence in the order of texture
references between frames, much better performance can be gotten with a
swapping algorithm that changes its behavior instead of going into a
full thrash:

While ( memory allocation for new texture fails )
    Find the least recently used texture.
    If the LRU texture was not needed in the previous frame,
        Free it
    Else
        Free the most recently used texture that isn’t bound to an
        active texture unit

Freeing the MRU texture seems counterintuitive, but what it does is
cause the driver to use the last bit of memory as a sort of scratchpad
that gets constantly overwritten when there isn’t enough space. Pure
LRU plows over all the other textures that are very likely going to be
needed at the beginning of the next frame, which will then plow over
all the textures that were loaded on top of them.
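
The eviction choice can be sketched as follows; the struct fields and helper
names are hypothetical bookkeeping, not a driver API, and the real policy would
also skip textures bound to active texture units:

```c
/* Toy model of the MRU-on-thrash swap policy described above.
 * Textures are tracked by the frame in which they were last used. */
typedef struct {
    int resident;        /* currently in card memory? */
    int last_used_frame; /* frame stamp of last reference */
} Texture;

/* Choose a victim: evict the LRU texture if it was not needed in the
 * previous frame, otherwise overwrite the MRU texture, using the last
 * bit of memory as a scratchpad instead of thrashing everything. */
int pick_victim(const Texture *tex, int count, int current_frame)
{
    int lru = -1, mru = -1;
    for (int i = 0; i < count; i++) {
        if (!tex[i].resident)
            continue;
        if (lru < 0 || tex[i].last_used_frame < tex[lru].last_used_frame)
            lru = i;
        if (mru < 0 || tex[i].last_used_frame > tex[mru].last_used_frame)
            mru = i;
    }
    if (lru >= 0 && tex[lru].last_used_frame < current_frame - 1)
        return lru;  /* idle last frame: safe to free */
    return mru;      /* thrash case: reuse the scratchpad slot */
}

/* Tiny demo: three resident textures last used in frames 5, 9, and 10. */
int demo_victim(int current_frame)
{
    Texture tex[3] = { {1, 5}, {1, 9}, {1, 10} };
    return pick_victim(tex, 3, current_frame);
}
```

When everything was touched within the last frame, the MRU slot gets recycled
and the rest of the working set survives to the next frame.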

If an application uses textures in a completely random order, any given
replacement policy has the same effect…

Texture priority for swapping is a non-feature. There is NO benefit to
attempting to statically prioritize textures for swapping. Either a
texture is going to be referenced in the next frame, or it isn’t.
There aren’t any useful gradations in between. The only hint that
would be useful would be a notice that a given texture is not going to
be in the next frame, and that just doesn’t come up very often or cover
very many texels.

With the MRU-on-thrash texture swapping policy, things degrade
gracefully as the total amount of textures increases, but due to several
issues, the total amount of textures calculated and swapped is far
larger than the actual amount of texels referenced to draw pixels.

The primary problem is that textures are loaded as a complete unit,
from the smallest mip map level all the way up to potentially a 2048 by
2048 top level image. Even if you are only seeing 16 pixels of it off
in the distance, the entire 12 meg stack might need to be loaded.

Packing can also cause some amount of wasted texture memory. When you
want to load a two meg texture, it is likely going to require a lot
more than just two megs of free texture memory, because a lot of it is
going to be scattered around in 8k to 64k blocks. At the pathological
limit, this can waste half your texture memory, but more reasonably it
is only going to be 10% or so, and cause a few extra texture swap outs.

On a frame at a time basis, there are often significant amounts of
texels even in referenced mip levels that are not seen. The back sides
of characters, and large textures on floors can often have less than
50% of their texels used during a frame. This is only an issue as they
are being swapped in, because they will very likely be needed within
the next few frames. The result is one big hitch instead of a steady
trickle of smaller loads.

There are schemes that can help with these problems, but they have costs of their own.

Packing losses can be addressed with compaction, but that has rarely
proven to be worthwhile in the history of memory management. A 128-bit
graphics accelerator could compact and sort 10 megs of texture memory
in about 10 msec if desired.

The problems with large textures can be solved by just not using large
textures. Both packing losses, and non- referenced texels can be
reduced by chopping everything up into 64x64 or 128x128 textures. This
requires preprocessing, adds geometry, and requires messy overlap of
the textures to avoid seaming problems.

It is possible to estimate which mip levels will actually be needed and
only swap those in. An application can’t calculate exactly the mip
map levels that will be referenced by the hardware, because there are
slight variations between chips and the slope calculation would add
significant processing overhead. A conservative upper bound can be
taken by looking at the minimum normal distance of any vertex
referencing a given texture in a frame. This will overestimate the
required textures by 2x or so and still leave a big hit when the top
mip level loads for big textures, but it can allow giant cathedral
style scenes to render without swapping.
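
That conservative upper bound could be computed along these lines — an illustrative sketch; the function and the `one_to_one_distance` parameter are my naming, not anything from the original:

```python
import math

def highest_mip_needed(min_vertex_distance, one_to_one_distance, num_levels):
    """Conservative bound on the sharpest mip level a texture can need.

    one_to_one_distance is the (hypothetical) distance at which the top
    mip maps roughly 1:1 to screen pixels; each doubling of distance
    beyond it drops one mip level.  0 means the full-resolution level.
    """
    if min_vertex_distance <= one_to_one_distance:
        return 0  # the top level may be touched; swap in everything
    level = int(math.log2(min_vertex_distance / one_to_one_distance))
    return min(level, num_levels - 1)
```

Swapping in only that level and coarser is what overestimates by 2x or so: the nearest vertex is usually closer than the nearest rendered texel.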

Clever programmers can always work harder to overcome obstacles, but in
this case, there is a clear hardware solution that gives better
performance than anything possible with software and just makes
everyone’s lives easier: virtualize the card’s view of its local memory.

With page tables, address fragmentation isn’t an issue, and with the
graphics rasterizer only causing a page load when something from that
exact 4k block is needed, the mip level problems and hidden texture
problems just go away. Nothing sneaky has to be done by the
application or driver, you just manage page indexes.

The hardware requirements are not very heavy. You need translation
lookaside buffers (TLB) on the graphics chip, the ability to
automatically load the TLB from a page table set up in local memory,
and the ability to move a page from AGP or PCI into graphics memory and
update the page tables and reference counts. You don’t even need that
many TLB, because graphics access patterns don’t hop all over the place
like CPU access can. Even with only a single TLB for each texture
bilerp unit, reloads would only account for about 1/32 of the memory
access if the textures were 4k blocked. All you would really want at
the upper limit would be enough TLB for each texture unit to cover the
texels referenced on a typical rasterization scan line.
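
The page-table scheme can be modeled in a few lines — a toy software model, not any actual hardware interface; the class and its bookkeeping are illustrative:

```python
PAGE_SIZE = 4096  # the 4k blocks discussed above

class VirtualTextureMemory:
    """Toy model of page-table texturing: a page crosses the bus only
    when the rasterizer first touches something in that exact 4k block."""

    def __init__(self):
        self.resident = {}    # (texture id, page index) -> local "address"
        self.page_loads = 0   # transfers from AGP/PCI into local memory

    def touch(self, tex_id, byte_offset):
        key = (tex_id, byte_offset // PAGE_SIZE)
        if key not in self.resident:      # page-table miss
            self.page_loads += 1          # load one 4k page, not a mip stack
            self.resident[key] = len(self.resident)
        return self.resident[key]
```

Nothing the application does changes; the manager just updates page indexes, which is the point of the scheme.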

Some programmers will say “I don’t want the system to manage the
textures, I want full control!” There are a couple responses to that.
First, a page level management scheme has flexibility that you just
can’t get with a software only scheme, so it is a set of brand new
capabilities. Second, you can still just choose to treat it as a fixed
size texture buffer and manage everything yourself with updates.
Third, even if it WAS slower than the craftiest possible software
scheme (and I seriously doubt it), so much of development is about
willingly trading theoretical efficiency for quicker, more robust
development. We don’t code overlays in assembly language any more…

Some hardware designers will say something along the lines of “But
the graphics engine goes idle when you are pulling the page over from
AGP!” Sure, you are always better off to just have enough texture
memory and never swap, and this feature wouldn�t let you claim any more
megapixels or megatris, but every card winds up not having enough
memory at some point. Ignoring those real world cases isn�t helping
your customers. In any case, it goes idle a hell of a lot less than if
you were loading the entire texture over the command fifo.

3Dlabs is supposed to have some form of virtual memory management in
the Permedia 3, but I am not familiar with the details (if anyone from
3Dlabs wants to send me the latest register specs, I would appreciate it).

A mouse controlled first person shooter is fairly unique in how quickly
it can change the texture composition of a scene. A 180-degree snap
turn can conceivably bring in a completely different set of textures on
a subsequent frame. Almost all other graphics applications bring
textures in at a much steadier pace.

So, given that 180-degree snap turn to a completely different and
uniquely textured scene, what would be the worst case performance? An
AGP 2x bus is theoretically supposed to have over 500 mb/sec of
bandwidth. It doesn’t get that high in practice, but linear 4k block
reads would give it the best possible conditions, and even at 300
mb/sec, reloading the entire texture working set would only take 10
msec or so.
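
Worked through, the numbers look like this — the 3 meg working-set size is an illustrative assumption, not a figure from the text:

```python
def reload_time_ms(working_set_bytes, bus_bytes_per_sec):
    """Time to pull a texture working set across the bus, in milliseconds."""
    return working_set_bytes / bus_bytes_per_sec * 1000.0

# ~3 megs of referenced 4k pages at a realistic 300 mb/sec AGP 2x rate:
print(reload_time_ms(3 * 1024 * 1024, 300 * 1024 * 1024))  # about 10 msec
```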

Rendering is not likely to be buffered sufficiently to overlap
appreciably with page loading, and the command transport for a complex
scene will take significant time by itself, so it shows that a worst
case scene will often not be able to be rendered in 1/60th of a second.

This is roughly the same lower bound that you get from a chip texturing
directly from AGP memory. A direct AGP texture gains the benefit of
fine-grained rendering overlap, but loses the benefit of subsequent
references being in faster memory (outside of small on-chip caches).
A direct AGP texture engine doesn’t have the higher upper bounds of a
cached texture engine, though. Its best and worst cases are similar
(generally a good thing), but the cached system can bring several times
more bandwidth to bear when it isn�t forced to swap anything in.

The important point is that the lower performance bound is almost an
order of magnitude faster than swapping in the textures as a unit by
the driver.

If you just positively couldn’t deal with the chance of that much worst
case delay, some form of mip level biasing could be made to kick in, or
you could try and do pre-touching, but I don�t think it would ever be
worth it. The worst imaginable case is acceptable, and you just won�t
hit that case very often.

Unless a truly large number of TLB are provided, the textures would
need to be blocked. The reason is that with a linear texture, a 4k
page maps to only a couple scan lines on very large textures. If you
are going with the grain you get great reuse, but if you go across it,
you wind up referencing a new page every couple texel accesses. What
is wanted is an addressing mechanism that converts a 4k page into a
square area in the texture, so the page access is roughly constant for
all orientations. There is also a benefit from having a 128-bit access
map to a square block of pixels, which several existing cards
already do. The same interleaving-of-low-order-bits approach can just
be extended a few more bits.
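
The interleaving-of-low-order-bits addressing can be written out directly — a minimal Morton-order swizzle; how many bits real chips interleave varies:

```python
def swizzle(x, y, bits):
    """Interleave the low-order bits of x and y so that a linear
    address range maps to a square block of texels."""
    addr = 0
    for i in range(bits):
        addr |= ((x >> i) & 1) << (2 * i)       # x bit -> even position
        addr |= ((y >> i) & 1) << (2 * i + 1)   # y bit -> odd position
    return addr
```

With the low bits interleaved, a 4k page covers a roughly square texel region, so the page access rate stays about constant whether a span runs with or across the texture grain.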

Dealing with blocked texture patterns is a hassle for a driver writer,
but most graphics chips have a host blit capability that should let the
chip deal with changing a linear blit into blocked writes. Application
developers should never know about it, in any case.

There are some other interesting things that could be done if the page
tables could trigger a cpu interrupt in addition to being automatically
backed by AGP or PCI memory. Textures could be paged in directly from
disk for truly huge settings, or decompressed from jpeg blocks, or even
procedurally generated. Even the size limits of the AGP aperture could
usefully be avoided if the driver wanted to manage each page’s mapping itself.

Aside from all the basic swapping issues, there are a couple of other
hardware trends that push things this way.

Embedded dram should be a driving force. It is possible to put several
megs of extremely high bandwidth dram on a chip or die with a video
controller, but it won’t be possible (for a while) to cram a 64 meg
geforce in. With virtualized texturing, the major pressure on memory
is drastically reduced. Even an 8mb card would be sufficient for 16
bit 1024x768 or 32 bit 800x600 gaming, no matter what the texture load.

The only thing that prevents a geometry processor based card from
turning almost any set of commands in a display list into a single
static dma buffer is the fact that textures may be swapped in and out,
causing the register programming in the buffer to be wrong. With
virtual texture addressing, a texture’s address never changes, and an
arbitrarily complex model can be described in a static dma buffer.


Wrecking Slade’s Development System

Filed under: — johnc @ 4:35 pm

Some people took it upon themselves to remotely wreck Slade’s development
system. That is no more defensible than breaking into Id and smashing things up.

The idea isn’t to punish anyone, it is to have them comply with the license
and continue to contribute. QuakeLives has quite a few happy users, and it
is in everyone’s best interest to have development continue. It just has to
be by the rules.



Filed under: — johnc @ 4:53 pm

This is a public statement that is also being sent directly to Slade at
QuakeLives regarding

I see both sides of this. Your goals are positive, and I understand the issues
and the difficulties that your project has to work under because of the GPL.
I have also seen some GPL zealots acting petty and immature towards you very
early on (while it is within everyone’s rights to DEMAND code under the GPL, it
isn’t necessarily the best attitude to take), which probably colors some of your
views on the subject.

We discussed several possible legal solutions to the issues.

This isn’t one of them.

While I doubt your “give up your rights” click through would hold up in court,
I am positive that you are required to give the source to anyone that asks for
it that got a binary from someone else. This doesn’t provide the obscurity
needed for a gaming level of security.

I cut you a lot of slack because I honestly thought you intended to properly
follow through with the requirements of the GPL, and you were just trying to
get something fun out ASAP. It looks like I was wrong.

If you can’t stand to work under the GPL, you should release the code to your
last binary and give up your project. I would prefer that you continue your
work, but abide by the GPL.

If necessary, I will pay whatever lawyer the Free Software Foundation
recommends to pursue this.


Anti-Cheat QW Proxy

Filed under: — johnc @ 3:10 pm

Several people have mentioned an existing anti-cheat QW proxy that
should also be applicable to modified versions:


Q3 on a 28.8 Modem

Filed under: — johnc @ 6:47 pm

I have been playing a lot of Q3 on a 28.8 modem for the last several days.

I finally found a case of the stuck-at-awaiting-gamestate problem that
turned out to be a continuous case of a fragment of the gamestate getting
dropped. I have changed the net code to space out the sending of the
fragments based on rate.

Note that there have been a few different things that result in stuck
at gamestate or stuck at snapshot problems. We have fixed a few of them,
but there may well still be other things that we haven’t found yet.

You can still have a fun game on a 28.8 modem. It is a significant
disadvantage, no question about it, but you can still have a good game if
you play smart. If there is someone that knows what they are doing on a
server with a ping in the low 100s, there won’t usually be much you can
do, but a skilled modem player can still beat up on unskilled T1 players…

Make sure your modem rate is set correctly. If you have it set too high,
large amounts of data can get buffered up and you can wind up with multiple
seconds of screwed up delays.

Only play on servers with good pings. My connection gives me a couple dozen
servers with mid 200 pings. 56k modems often see servers with sub 200 pings.
If you ignore the ping and just look for your favorite map, you will probably
have a crappy game.

If you have a good basic connection to the server, the thing that will mess
up your game is too much visible activity. This is a characteristic of the
number of players, the openness of the level, and the weapons in use.

Don’t play on madhouse levels with tons of players. None of the normal Q3
maps were really designed for more than eight players, and many were only
designed for four.

Don’t play in the wide open maps unless there are only a couple other
players. Four very active players in a wide open area are enough to bog
down a modem connection.

I just implemented “sv_minPing” / “sv_maxPing” options so servers can restrict
themselves to only low ping or high ping players. This is done based on the
ping of the challenge response packet, rather than any in-game pings. There
are a few issues with that – an LPB may occasionally get into an HPB server
if they happen to get a network hiccup at just the right time, and the number
used as a gate will be closer to the number shown in the server list, rather
than the number seen in gameplay. I would recommend “sv_minPing 200” as a
reasonable breakpoint.


Quake 1 Source Code and Cheating

Filed under: — johnc @ 10:58 pm

There are a number of people upset about the Quake 1 source
code release, because it is allowing cheating in existing games.

There will be a sorting out period as people figure out what directions
the Quake1 world is going to go in with the new capabilities, but it
will still be possible to have cheat free games after a few things get
worked out.

Here’s what needs to be done:

You have to assume the server is trusted. Because of the way Quake
mods work, it has always been possible to have server side cheats
along the lines of “if name == mine, scale damage by 75%”. You have
to trust the server operator.

So, the problem then becomes a matter of making sure the clients are
all playing with an acceptable version before allowing them to connect
to the server. You obviously can’t just ask the client, because if it
is hacked it can just tell you what you want to hear. Because of the
nature of the GPL, you can’t just have a hidden part of the code to do the verification.

What needs to be done is to create two closed source programs that act
as executable loaders / verifiers and communication proxies for the
client and server. These would need to be produced for each platform
the game runs on. Some modifications will need to be done to the
open source code to allow it to (optionally) communicate with these programs.

These programs would perform a robust binary digest of the programs they
are loading and communicate with their peer in a complex encrypted
protocol before allowing the game connection to start. It may be
possible to bypass the proxy for normal packets to avoid adding any
scheduling or latency issues, but it will need to be involved to some
degree to prevent a cheater from hijacking the connection once it is established.
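
A minimal sketch of the digest half of such a verifier — the hash algorithm and the approved-digest set are my assumptions; the text leaves the actual digest and encrypted protocol unspecified:

```python
import hashlib

def binary_digest(path):
    """Robust digest of an executable, hashed in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def allow_connection(client_exe_path, approved_digests):
    """Server-side gate: only let approved builds start a game connection."""
    return binary_digest(client_exe_path) in approved_digests
```

The real scheme would exchange the digest inside the encrypted peer handshake rather than trusting the client to report it.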

The server operator would determine which versions of the game are to
be allowed to connect to their server if they wish to enforce proxy
protection. The part of the community that wants to be competitive
will have to agree to some reasonable schedule of adoption of new versions.

Nothing in online games is cheat-proof (there is always the device
driver level of things to hack on), but that would actually be more
secure than the game as it originally shipped, because hex edited patches
wouldn’t work any more. Someone could still in theory hack the closed
source programs, but that is the same situation everyone was in with
the original game.

People can start working on this immediately. There is some prior art
in various unix games that would probably be helpful. It would also
be a good idea to find some crypto hackers to review proposed proxy
communication strategies.


Quake 1 Source Code

Filed under: — johnc @ 8:24 pm

The Q3 game source code is getting pushed back a bit because we had to do
some rearranging in the current codebase to facilitate the release, and
we don’t want to release in-progress code before the official binary point release.

We still have a Christmas present for the coders, though:

Happy holidays!


Honeymoon with Anna Kang

Filed under: — johnc @ 8:38 am

Anna Kang left Id a couple weeks ago to found her own company -
Fountainhead Entertainment.

It wasn’t generally discussed during her time at Id, but we had been going
out when she joined the company, and we were engaged earlier this year.
We are getting married next month, and honeymooning in Hawaii. At her
thoughtful suggestion, we are shipping a workstation out with us, so I don’t
fall into some programming-deprivation state. How great is that? :-)

Now that Q3A has shipped, the official winner of her Id Software figurine
chess set contest is Rowan Crawford for his prose and art.

An honorable mention goes to Reine Hogberg and Peder Hardings for their Q3A
Blair Witch Project. They will receive silver Q3A medallions.


Independent OpenGL Conformance Nazi

Filed under: — johnc @ 11:42 pm

WANTED: Independent OpenGL conformance nazi

I think there is a strong need for a proactive, vendor-neutral
OpenGL watchdog, or even a small group, especially in the linux space.

I have been working on the utah-GLX team for quite a while now, and while
I have been very pleased with the results, I would like to see more effort
spent on doing things as right as possible. Because the developers (me
included) are basically just doing the work in their spare time, testing
usually only consists of running their favorite OpenGL application, and a
few of the mesa demos, or some of the xscreensaver hacks.

Recently I did the initial bringup of a RagePro driver on linux, and I was
much more conscious of the large untested feature space, and the tunnel
vision I was using to get it to the point of running Q3.

What we need is someone, or a group of someones, who can really exercise
different implementations through all corners of the OpenGL specification
and provide detailed lists of faults with minimal test cases to reproduce
the behavior.

In most cases, the bugs could then be fixed, but even if it is decided that
the incorrect behavior is going to stay (to avoid a software fallback in a
common accelerated case), there would be clear documentation of it.

I consider performance on the matrox driver right now to be “good enough”.
There is definitely more performance oriented work going on, but given a
choice of tasks to work on, I would rather improve quality and coverage
instead of kicking a few more fps out of Q3.

One of Alex St. John’s valid points was that “The drivers are always broken”.
There are a lot of factors that contribute to it, including fierce
benchmarking competition causing driver writers to do some debatable things
and diminish focus on quality. With open source drivers, some of those
factors go away. Sure, it is nice to beat windows drivers on some benchmarks,
but I wouldn’t let pursuit of that goal introduce dumb things into the code.

Some of the windows IHVs have good testing procedures and high quality
drivers, but even there, it would be nice to have someone hounding them about
things beyond how well quake related games run.

The same goes for Apple, especially now that there is both ATI and 3dfx
hardware to support.

Conformance would be my primary interest, but characterizing the performance
of different drivers would also be useful, especially for edge cases that may
or may not be accelerated, like glDrawPixels.

On linux right now, we have:

The traditional fullscreen 3dfx mesa driver
The DRI-GLX based banshee/voodoo3 driver
The utah-GLX matrox G200/G400 driver
The temporary utah-GLX nvidia driver
The newly born utah-GLX ATI Rage Pro driver

If anyone is interested, join the developer list off of:

Doing a proper job would require a really good knowledge of the OpenGL
specification, and a meticulous style, but it wouldn’t require hardcore
registers-and-dma driver writing skills, only basic glut programming.

If someone does wind up developing a good suite of tools and procedures and
gives one of the drivers a really good set of feedback, I would be happy to
provide extra video cards so they could beat up all the implementations.



Filed under: — johnc @ 7:47 pm

d:>mkdir research


I am very happy with how Q3 turned out. Probably more than any
game we have done before, its final form was very close to its
initial envisioning.

I will be getting all the Q3 code we are going to release together
over the next week or so. I will write some overview documentation
to give a little context, and since you can do game mods without
needing a commercial compiler now, I will write a brief step-by-step
to modifying the game code.

I’m looking forward to what comes out of the community with Q3.

The rough outline of what I am going to be working on now:

We will be supporting Q3 for quite some time. Any problems
we have will get fixed, and some new features may sneak in.

I have two rendering technologies that I intend to write research
engines for.

I am going to spend some time on computer vision problems. I think
the cheap little web cams have some interesting possibilities.

I am going to explore some possibilities with generalizing 3D game
engines into more powerful environments with broader uses. I think
that a lot of trends are coming to the point where a “cyberspace” as
it is often imagined is beginning to be feasible.

I am going to spend more time on some Free Software projects. I have
been stealing a few hours here and there to work on the matrox glx
project for a while now, and it has been pretty rewarding.
People with an interest in the guts of a 3D driver might want to look
at the project archives at
The web pages aren’t very up to date, but the mailing list covers some
good techie information.


Not-Telefragging Bug

Filed under: — johnc @ 11:44 pm

* fixed not-telefragging bug
* disabled flood protection with local clients
* fixed headoffset and gender on some model changes
* init cg_autoswitch cvars in cl
* fixed clearing of vm bss on restart
* fixed hang when looped part of song isn’t found
* fixed two NAT clients connecting to same server
* fixed warning on random once only triggers
* added “g_allowVote 0”
* added developer background sound underrun warning
* move sound loading before clients so low memory
defer works across maps
* changed cgame load failure to a drop


Linux Version

Filed under: — johnc @ 2:48 am

Linux version isn’t going to make it tonight. We got
too busy with other things. Sorry. Tomorrow.

* shrink zone, grow hunk
* flush memory on an error
* fixed crash pasting from clipboard
* test all compiler optimizations – 5% speedup
* fixed major slowdown in team games with large
numbers of players and location markers


Mac Version is Out

Filed under: — johnc @ 1:59 am

The mac version is out. Go to for links.

The mac version going out has the executable fixes that we
have made in the last couple days, but most of the fixes
have been in code that runs in the virtual machine, and we
can’t update that without making it incompatible with the
pc version.

The game remains very marginal in playability on 266mhz imacs
and iBooks.

A 333mhz imac should be playable for a casual gamer if the
graphics options are turned down to the “fastest” setting.

There is still a lot of room for improvement on ATI’s side
with the RagePro drivers. Almost all the effort so far
has been on the Rage128 drivers.

The G3 systems run fine, but a little slower than a pc of
equal mhz.

The rage128 cards in the G3s are only clocked at 75mhz, so
you can’t run too high of a resolution, but you can get
very nice image quality. I usually play with these settings:
r_mode 2 // 512*384 res
r_colorbits 32 // 32 bit color
r_texturemode gl_linear_mipmap_linear // trilinear filtering

I haven’t played on one of the new iMacs or G4’s but they
both use the rage128 driver, which is fairly high quality
now, so they should perform well.

We found a fairly significant problem with inputSprockets and
mouse control (motion is dropped after 40msec). I have done a
little working around it, so mouse control should be somewhat
better in this version, but it will hopefully be fixed
properly by Apple in the next IS rev. It isn’t an issue if
your framerate is high enough, but iMacs never see that
framerate on their very best days…

Linux version tomorrow night, if nothing horrible happens.

Some advance warning about something that is sure to stir
up some argument:

We should be handing off the masters for all three platforms
within a day or two of each other, but they aren’t going to
show up in stores at the same time. Publishers, distributers,
and stores are willing to go out of their way to expedite the
arrival of the pc version, but they just won’t go to the
same amount of trouble for mac and linux boxes.

The mac and linux executables will NOT be available for
DOWNLOAD UNTIL AFTER CHRISTMAS. This means that if you want
to play on the mac or linux, don’t pick up a copy of the pc
version and expect to download the other executables.

Our first update to the game will be for all platforms, and
will allow any version to be converted into any other, but
we intend to hold that off for a little while.

We are doing this at the request of the distributors. The
fear is that everyone will just grab a windows version,
and the separate boxes will be ignored.

A lot of companies are going to be watching the sales
figures for the mac and linux versions of Q3 to see if
the platforms are actually worth supporting. If everyone
bought a windows version and the other boxes sold like crap
in comparison, that would be plenty of evidence for most
executives to can any cross platform development.

I know there are a lot of people that play in both windows
and linux, and this may be a bit of an inconvenience in
the short term, but this is an ideal time to cast a vote
as a consumer.

It’s all the same to Id (I like hybrid CD’s), and our continued
support of linux and mac (OS X for the next title) is basically
a foregone conclusion, but the results will probably influence
other companies.

* fixed getting your own dropped / kicked message
* added developer print for all file open writes
* fixed occasional bad color on connecting background
* fixed occasional telefrag at start of skirmish game
* fix not being able to ready at intermission if
you were following a bot
* never timelimit during tourney warmup
* fixed local timer on map_restart
* offset sorlag’s head model for status bar
* added g_gametype to the votable commands:
map, map_restart, kick, g_gametype
* changed sound registration sequence to fix losing
default sound
* “sv_floodProtect 0” to turn off flood protection
* converted sequenced messages to static arrays
* fixed custom skin reassignment on all LOD


Demo Servers Flood-Protection

Filed under: — johnc @ 10:22 pm

The demo servers have general purpose flood-protection that has
caused some confusion.

Clients are only allowed to make one command a second of any kind.
This prevents excessive flooding by chats, model / name changes,
and any other command that could possibly be exploited. The
command streams are stalled, so it doesn’t have any effect on
processing order or reliability.

This means that if you issue two commands immediately after one
another, there will be a one second stall before the second
command and all movement clears. You see this on the lagometer
as yellow spiking up for a second, then dropping away.
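
The stall behaviour can be sketched as follows — a toy model; the class and names are mine, not the actual server code:

```python
class CommandGate:
    """Stall (never drop) a client's reliable commands to one per second."""

    def __init__(self):
        self.next_allowed = 0.0

    def schedule(self, now):
        """Return when this command will actually be processed."""
        run_at = max(now, self.next_allowed)
        self.next_allowed = run_at + 1.0  # one command a second, of any kind
        return run_at
```

Two commands issued back to back run a full second apart — the yellow spike on the lagometer — but ordering and reliability are preserved.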

Hitting tab for the scoreboard sends a command, so you trigger
the flood protection if you bang tab a couple times. This has
been fixed so that the scoreboard will never send more than
one update request every two seconds, but you will need to
watch out for it in the existing demo.

The deferred model loading has also caused some confusion, but
that is a feature, not a bug. :-)

In previous tests, you hitched for a second or two whenever a
client with a new model or skin joined a game.

In the demo, when a client joins the game they will be given
the model and skin of someone else temporarily, so there is
no hitch. The only time it will hitch on entry is if it is
a team game and there isn’t anyone on the team they join. I
make sure the skin color is correct, even if the model isn’t.

These “defered” clients will be loaded when you bring up the
scoreboard. You can do this directly by hitting tab, or you
can have it happen for you when you die.

The point is: you died BEFORE it hitched, not as a result of
the hitch.

The scoreboard header is up, but it is still a bit easy to miss.

* fixed high server idle cpu usage
(it was spinning in place until maxfps was used!)
* fixed g_password, which is crashing in the demo
* moved svs.snapshotEntities to the hunk
* enable lagometer whenever running a non-local game
* cg_drawTeamOverlay cvar, set to 0 by default
* finished authorize work
* better reporting of unused highwater memory


Vertex Lighting in the Existing Demos

Filed under: — johnc @ 5:02 am

The way vertex lighting is working in the existing demos is that
only two pass shaders (lightmap * texture) were collapsed to a
single pass, all other shaders stayed the same.

Xian added some chrome and energy effects to parts of q3tourney2,
which changed them from two pass to three pass shaders. We felt
that that 50% increase on those polygons was justified in normal
play, but as people have pointed out, when you are playing with
vertex lighting, that three passes stays three passes instead
of collapsing to a single pass, resulting in a 300% increase
on those polygons over the way it was before. Still faster than
lightmap mode, but a large variance over other parts of the level.

Today I wrote new code to address that, and improve on top of it.

Now when r_vertexlight is on, I force every single shader to a
single pass. In the cases where it isn’t a simple light*texture
case, I try and intelligently pick the most representative pass
and do some fixups on the shader modulations.
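
The pick-a-representative-pass fixup might look roughly like this — purely illustrative, since the actual shader stage structures aren't shown here; “rgbGen vertex”-style modulation is the Q3 shader concept being mimicked:

```python
def collapse_for_vertex_light(stages):
    """Force a multi-pass shader down to one pass for r_vertexlight,
    preferring the base diffuse stage as the representative pass."""
    best = next((s for s in stages if s.get("role") == "diffuse"), stages[0])
    collapsed = dict(best)
    collapsed["rgb_gen"] = "vertex"  # fix up the modulation for vertex light
    return [collapsed]
```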

This works out great, and brings the graphics load down to the
minimum we can do with the data sets.

Performance is still going to be down a couple msec a frame due to
using dynamic compilation instead of dll’s for the cgame, but that
is an intentional tradeoff. You can obviously slow things down by
running a lot of bots, but that is to be expected.

I am still investigating the high idle dedicated server cpu utilization
and a few other issues. The server cpu time will definitely be
higher than 1.08 due to the dynamic compiler, but again, that is
an intentional tradeoff.

A set of go-fast-and-look-ugly options:
r_mode 2
r_colorbits 16
r_vertexlighting 1
r_subdivisions 999
r_lodbias 2
cg_gibs 0
cg_draw3dicons 0
cg_brassTime 0
cg_marks 0
cg_shadows 0
cg_simpleitems 1
cg_drawAttacker 0

* icons for bot skills on scoreboard
* r_vertexlight is now “force single pass” for all shaders
* modified cd key check to be fire and forget on the client
* file handle debugging info in path command
* network address type of NA_BAD for failed resolves
* better command line variable overriding
* cache scoreboard for two seconds
* sync sound system before starting cinematics
* fixed many escapes disconnect from server exiting the game
* fixed shotgun pellets underwater expending all temp entities


Q3 Demo Test

Filed under: — johnc @ 7:41 pm

The demo test is built. It should go up in a couple
hours if nothing explodes.

Mac and linux builds won’t be out tonight.

* clear SVF_BOT when exiting follow mode
* render temp memory
* new mac GL initialization code
* no zone memory use in music thread
* added check for trash past zone block
* explicitly flush journal data file after a write
* added FS_Flush


Graphic for Defer

Filed under: — johnc @ 2:14 am

* graphic for defer
* don’t set any systeminfo vars from demos
* A3D fix
* spectator follow clients that disconnect
* stop follow mode before going to intermission so you can ready
* use (fullbright) vertex lighting if the bsp file doesn’t have lightmaps
* auto set demo keyword on servers
* finished cd key authorization
* fixed symbol table loading for interpreter
* reconnect command
* removed limit on number of completed commands
* changed default name to “UnnamedPlayer”
* awards over people’s heads in multiplayer
* fixed global powerup announcements


Teamplay Menu Comment

Filed under: — johnc @ 3:15 am

* teamplay menu comment
* shrank and moved “RECORDING demo:” text
* identified and worked around Apple input queue issue
* properly send configstring resets on map_restart
* don’t clip sound buffer on file writes
* don’t draw scoreboard during warmup
* auto load added bots in single player
* swapped order of map_restart and warmup configstring
* disable dynamic lights on riva128 due to lack of blend mode
* put frags left warning back in all gametypes
* removed joystick button debug prints


Spinning Barrel on Respawn

Filed under: — johnc @ 10:08 pm

* fixed spinning barrel on respawn issue
* clear eflags before intermission
* shutdown menu on starting a cinematic
* mask name colors to 0-7 range
* fixed jpeg loading alpha channel
* try for not-nearest spawn twice instead of once
* made unzoomed exactly identity mouse modifier
* cl_debugmove [1/2]
* m_filter
* fixed time warnings
* allow timelimits to hit with only a single player
* filter local games with different protocol versions
* fixed bad arg 0 after sysinfo configstring
* removed unneeded svc_servercommand at start of command strings
* fixed redundantly loaded level bug
* fixed journal playback from demo build
* removed background image from viewlog window


Bad Weapon Number

Filed under: — johnc @ 10:01 pm

* check for bad weapon number in non-3d ammo icon on death
* fixed plane categorization
* error command does an ERR_DROP if given a parm
* don’t load high LOD models if r_lodbias
* nobots/nohumans options for player spawn spots
* prevent voice rewards on frag that enters intermission
* disallow native dll loading if sv_pure
* loaddefered cgame command, issued on addbot
* drop the weapon you are changing TO if you only had a
MG or gauntlet
* fixed bounce pad event prediction for all angles
* allow empty tokens in map files
* fixed infos exceeded warning on bot parse
* warning on mismatched mipmap/picmip/wrapclamp image reuse
* fixed pain echo sound after predicted falling damage
* move sound to hunk
* move vm to hunk
* restart game vm in place for map_restarts
* avoid all lightmaps entirely when using vertex light
* pretouch all images after registration
* pretouch all known in-use memory before starting a game
* on error, shutdown client before server, to be more likely
to get out of fullscreen before a recursive error
* new pre-allocated memory manager to crutch up mac
* meminfo command replaces hunk_stats and z_stats
* adjusted scoreboard titles
* no gauntlet reward on corpses
* fixed snd_restart when paused


Cd Check in Single Player

Filed under: — johnc @ 2:52 am

* cd check in single player
* removed drop shadow on console input line
* swapped mynx pain sounds
* fixed cleared music buffer on track loop again
* force skins on spectators in team games to prevent having a
default waste memory
* only defer to a model with same team skin
* fixed grenade-disappearing-at-floor bug when about to explode
* draw reward medals during gameplay
* added “humiliation” feedback for gauntlet kills
* spread respawn times to prevent pattern running:
#define RESPAWN_ARMOR 25
#define RESPAWN_AMMO 40


Remove Grapple from Give All

Filed under: — johnc @ 1:59 am

* remove grapple from give all
* fixed pick-up-two-healths-even-if-you-don’t-need-the-second bug
* moved wrap/clamp to image generation function and added to imagelist
fixed an improper clamp on macs
* different menuback for ragepro
* fixed mac button problems with OS9 and wheel mice
* teamplay rule mods:
less MG damage (5 instead of 7)
weapons always have a full load of ammo
don’t drop powerups
* changed low detail r_subdivisions to 25 to prevent poke through
* removed warning on empty servercommand when systeminfo changes
* g_debugDamage
* when a vote is tied with all votes in, immediately fail it
* haste smoke puffs


Play Chat Sound During Votes

Filed under: — johnc @ 2:43 pm

* play chat sound during votes
* draw 2D icon for ammo if cg_draw3dicons 0
* fixed losing input on menu vid_restart
* made “vote” and “callvote” completable
* remove mac about dialog
* fixed demos skipping initial time due to loading
* fixed timing on timedemo startup
* don’t flash attacker when damaging self
* display capturelimit instead of fraglimit on score tabs in ctf
* recursive mirror/portal changed to a developer warning
* fixed bug with follow mode spectators
* battle suit shader
* notsingle spawn option
* separated torso and legs priority animation counters
so gesture doesn’t mess with legs
* adjusted value to prevent missed launch sound
on accelerator pads
* setviewpos x y z yaw
same parms as viewpos command
* stop sound on file access
* fixed developer prints from renderer
* deferred client media loading, only load models and sounds for
new players when you die or bring up the scoreboard
* fix for double colliding against same plane and getting stuck
* dropped LG damage to 160 pts / sec
* don’t snap predicted player entity, smooths deaths and movers
* all sine movers are instant kill on block
* fixed items riding on bobbers


Robert Duffy

Filed under: — johnc @ 3:37 pm

An announcement:

We have hired Robert Duffy as a full time employee, starting in

He has been maintaining the level editor since the release of Q2,
but a number of things have convinced me it is time to have a
full time tool-guy at id.

The original QE4 editor was my very first ever Win32 program, and
while Robert has done a good job of keeping it on life support
through necessary extensions and evolutions, it really is past
its designated lifespan.

I want to take a shot at making the level editor cross platform,
so it can be used on linux and mac systems. I’m not exactly
confident that it will work out, but I want to try.

Many of the content creation tasks seem to be fairly stabilized
over the last several years, and our next product is going to be
pretty content directed, so I can justify more engineering
effort on writing better tools.

It is time for a re-think of the relationships between editor,
utilities, and game. I am still an opponent of in-game editors,
but I want to rearrange a lot of things so that some subsystems
can be shared.

All of that added up to more than I was going to be able to do
in the time left after the various research, graphics, and
networking things I want to pursue.

* added r_speeds timing info to cinematic texture uploads
* fixed loop sounds on movers
* new bfg sound
* “sv_pure 1” as default, requires clients to only get data from
pk3 files the server is using
* fixed fog pass on inside of textured fog surfaces
* properly fog sprites
* graphics for scoreboard headers
* show colored names on attacker display and scoreboard
* made “no such frame” a developer only warning
* count a disconnect while losing in tournament mode as a win
for the other player
* fixed running with jump held down runs slow
* draw “connection problems” in center of screen in addition
to phone jack icon
* cut marks properly on non-convex faces
* fixed bug with command completion partial match and case sensitivity
* fixed console dropping on level start
* fixed frags left feedback after restarts
* fog after dlight
* removed fogged stages from shader, dynamically generate
* removed fogonly shader keyword, infer from surfaceparm
* removed unneeded reinit of shader system on vid_restart


Full “Demo” Release for Quake 3

Filed under: — johnc @ 9:36 pm

The next release will be the full “demo” release for quake 3.
It will include bots and a new, simple level that is suitable for
complete beginners to play, as well as the existing q3test maps.

The timing just didn’t work out right for another test before we
complete the game.

We plan on releasing the demo after code freeze, when the entire
game is in final testing, which will give us a few days time to
fix any last minute problems that show up before golden master.

No, I don’t have an actual date when that will be.

I got an iBook in on friday. It is sort of neat (I need to go
buy an AirPort, then it will definitely be neat), but it is
currently way too slow to play Q3 on.

Apple’s high end G3 and G4 systems with rage128/rage128pro cards
and latest drivers are just about as fast as equivalent wintel systems,
but the rest of the product line is still suffering a noticeable
speed disadvantage.

The new iMac has a rage128, but it is only an 8mb/64bit version. Still,
with agp texturing it is a solid platform with performance that is
certainly good enough to play the game well on.

Existing iMacs have 6mb ragePro video. ATI’s windows drivers for
the pro have come a long way, and Q3 is plenty playable on
windows with a rage pro (ugly, but playable). On apple systems
(iMacs and beige G3’s), the performance is sometimes as low as HALF
the windows platform. The lack of AGP contributes some to this, but
it is mostly just a case of the drivers not being optimized yet. The
focus has been on the rage128 so far.

The iBook is also ragePro based, but it is an ultra-cheap 32 bit
version. It does texture over AGP, but it is slooooow. I suspect it
is still driver issues, because it is still very slow even at 320x240,
so that leaves hope that it can be improved.

Another issue with the Apple systems is that Apple 16 bit color is
actually 15 bit. I never used to think it made much difference, but
I have been doing a lot of side by side comparing, and it does turn
out to be a noticeable loss of quality.

* new lg splash
* added channel number for local sounds so feedbacks
don’t override announcers
* removed scoreup feedback sound
* expand score tabs as needed for large scores
* fixed bfg obit
* fixed swimming off floors
* fixed swim animation when jumping in shallow water
* fixed first weapon switch problem
* convert all joystick axis to button events (twist is now bindable)


Make Sure Video is Shut Down

Filed under: — johnc @ 5:08 pm

* make sure video is shut down for errors on startup
* automatic fallback to vm if dll load fails
* compressed jump tables for qvm
* removed common qfiles.h and surfaceflags.h from utils and game
* don’t load qvm symbols if not running with developer
* “quake3 safe” will run the game without loading q3config.cfg
* ignore network packets when in single player mode
* dedicated server memory optimizations. Tips:
com_hunkMegs 4
sv_maxclients 3
bot_enable 0
* fixed logfile on mac
* new time drifting code
* fixed file handle leak with compressed pk3 files
* q3data changed to remove shader references from player models
* throw a fatal error if drop errors are streaming in
* fixed com_hunkMegs as command line parm
* spawn spectators at intermission point
(info_spectator_start has been removed)
* new sound channel for local sounds
* fixed follow toggle on bots
* don’t write to games.log in single player
* fixed improper case sensitivity in S_FindName


Handle Window Close Events Properly

Filed under: — johnc @ 3:00 pm

* handle window close events properly
* enable r_displayRefresh selection and feedback on mac
* colorized railgun muzzle flash, impact flash, and mark
* exactly synced animating textures with waveforms and collapsed
all explosion sequences into single shaders
* removed unneeded black pass on hell skies
* fixed grenades sticking to steep slopes
* scan for next highest fullscreen resolution when exact
mode fails (fixes SGI flat panel failing to init)
* all cgame cvars now have a cg_ prefix (crosshair, fov, etc)
* clear clientinfo before parsing configstring
* make all feedback voiceovers share the same channel
* fixed nodraw curves
* fixed obits from shooter entities
* fixed chat char issue
* separate gentity_t fields into sharedEntity_t
* reintegrated q_mathsys with q_math
* cg_forcemodel also forces player sounds
* unknown cmd commands don’t chat
* fixed strip order on text quads


R_primitives 3 Path for Non-Vertex Array Testing

Filed under: — johnc @ 12:09 am

* r_primitives 3 path for non-vertex array testing
* specify sex in model animation.cfg file
* proper dropping of failed bot inits
* removed identical pain sounds
* serverTime strictly increasing across levels
* added GL_DECAL multitexture collapse
* windowed mouse on mac
* fixed byte order issue with curve clipping on mac
* made com_defaultextension buffer safe
* fixed levelshot and added antialiasing to image
* don’t clear bot names before kick message
* made servercommand sequences strictly increasing across
level changes
* unpause on vid_restart


Fixed Steady Snapshot Test

Filed under: — johnc @ 11:14 pm

* fixed steady snapshot test
* fixed incorrect 0 ping if past client messages
* fixed loser-disconnecting-at-tourney-intermission
sorting problem
* general purpose flood protection, limiting all user
commands to one a second by stalling the client,
so the commands don’t actually get dropped, but
are delayed as needed
* replace headnode overflow with lastCluster
* fixed bad extrapolation on unpausing
* fixed player twitch on unpausing
* print client names on loading screen
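The stall-instead-of-drop approach in that flood protection change can be sketched like this (the function and field names here are illustrative, not the actual Q3 source):

```c
// Flood protection by stalling: commands past the rate limit are never
// dropped, they are just scheduled to execute later. (Sketch with
// invented names; the real implementation differs.)
typedef struct {
    int nextCommandTime;  // earliest server time the next command may run
} client_t;

// Returns the server time at which this command should execute.
int RateLimitCommand(client_t *cl, int serverTime) {
    int runTime = serverTime;
    if (runTime < cl->nextCommandTime)
        runTime = cl->nextCommandTime;    // stall: run later, don't drop
    cl->nextCommandTime = runTime + 1000; // at most one command per second
    return runTime;
}
```

Because the command is delayed rather than discarded, a legitimate client that bursts a few commands sees them trickle through instead of vanishing.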


The New G3 Mac Hardware

Filed under: — johnc @ 3:48 pm

Ok, many of you have probably heard that I spoke at the macworld
keynote on tuesday. Some information is probably going to get
distorted in the spinning and retelling, so here is an info
dump straight from me:

Q3test, and later the full commercial Quake3: Arena, will be simultaneously
released on windows, mac, and linux platforms.

I think Apple is doing a lot of things right. A lot of what they are
doing now is catch-up to wintel, but if they can keep it up for the next
year, they may start making a really significant impact.

I still can’t give the mac an enthusiastic recommendation for sophisticated
users right now because of the operating system issues, but they are working
towards correcting that with MacOS X.

The scoop on the new G3 mac hardware:

Basically, it’s a great system, but Apple has oversold its
performance relative to intel systems. In terms of timedemo scores,
the new G3 systems should be near the head of the pack, but there
will be intel systems outperforming them to some degree. The mac has
not instantly become a “better” platform for games than wintel, it
has just made a giant leap from the back of the pack to near the front.

I wish Apple would stop quoting “Bytemarks”. I need to actually
look at the contents of that benchmark and see how it can be so
misleading. It is pretty funny listening to mac evangelist types
try to say that an iMac is faster than a pentium II-400. Nope.
Not even close.

From all of my tests and experiments, the new mac systems are
basically as fast as the latest pentium II systems for general
cpu and memory performance. This is plenty good, but it doesn’t
make the intel processors look like slugs.

Sure, an in-cache, single precision, multiply-accumulate loop could
run twice as fast as a pentium II of the same clock rate, but
conversely, a double precision add loop would run twice as fast
on the pentium II.
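For concreteness, the two loop shapes being compared look roughly like this (an illustrative microbenchmark shape, not measured code from either platform):

```c
// Single-precision multiply-accumulate: the case where the PPC shines,
// since each iteration can compile to one fused multiply-add (fmadd).
float mac_loop(const float *a, const float *b, int n) {
    float sum = 0.0f;
    for (int i = 0; i < n; i++)
        sum += a[i] * b[i];
    return sum;
}

// Double-precision add loop: the case where the pentium II pulls ahead.
double add_loop(const double *v, int n) {
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += v[i];
    return sum;
}
```

The point is that a single cherry-picked loop can swing the comparison either way, which is why whole-program benchmarks like Spec95 are more trustworthy.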

Spec95 is a set of valid benchmarks in my opinion, and I doubt the
PPC systems significantly (if at all) outperform the intel systems.

The IO system gets mixed marks. The 66 mhz video slot is a good step
up from 33 mhz pci in previous products, but that’s still half the
bandwidth of AGP 2X, and it can’t texture from main memory. This
will have a small effect on 3D gaming, but not enough to push it
out of its class.

The 64 bit pci slots are a good thing for network and storage cards,
but the memory controller doesn’t come close to getting peak
utilization out of it. Better than normal pci, though.

The video card is almost exactly what you will be able to get on
the pc side: a 16 mb rage-128. Running on a 66mhz pci bus, its
theoretical peak performance will be midway between the pci and
agp models on pc systems for command traffic limited scenes. Note
that current games are not actually command traffic limited, so the
effect will be significantly smaller. The fill rates will be identical.

The early systems are running the card at 75 mhz, which does put
it at a slight disadvantage to the TNT, but faster versions are
expected later. As far as I can tell, the rage-128 is as perfect
as the TNT feature-wise. The 32 mb option is a feature ATI can
hold over TNT.

Firewire is cool.

It’s a simple thing, but the aspect of the new G3 systems that
struck me the most was the new case design. Not the flashy plastic
exterior, but the functional structure of it. The side of the
system just pops open, even with the power on, and lays the
motherboard and cards down flat while the disks and power supply
stay in the enclosure. It really is a great design, and the benefits
were driven home yesterday when I had to scavenge some ram out of old
wintel systems – most case designs suck really bad.

I could gripe a bit about the story of our (lack of) involvement
with Apple over the last four years or so, but I’m calling that
water under the bridge now.

After all the past fiascos, I had been basically planning on ignoring Apple
until MacOS X (rhapsody) shipped, which would then turn it into a platform
that I was actually interested in.

Recently, Apple made a strategic corporate decision that games were a
fundamental part of a consumer oriented product line (duh). To help that
out, Apple began an evaluation of what it needed to do to help game
developers.

My first thought was “throw out MacOS”, but they are already in the process
of doing that, and it’s just not going to be completed overnight.

Apple has decent APIs for 2D graphics, input, sound, and networking,
but they didn’t have a solid 3D graphics strategy.

Rave was sort of ok. Underspecified and with no growth path, but
sort of ok. Pursuing a proprietary api that wasn’t competitive with
other offerings would have been a Very Bad Idea. They could have tried
to make it better, or even invent a brand new api, but Apple doesn’t have
much credibility in 3D programming.

For a while, it was looking like Apple might do something stupid,
like license DirectX from microsoft and be put into a guaranteed
trailing edge position behind wintel.

OpenGL was an obvious direction, but there were significant issues with
the licensing and implementation that would need to be resolved.

I spent a day at apple with the various engineering teams and executives,
laying out all the issues.

The first meeting didn’t seem like it went all that well, and there wasn’t
a clear direction visible for the next two months. Finally, I got the all
clear signal that OpenGL was a go and that apple would be getting both the
sgi codebase and the conix codebase and team (the best possible arrangement).

So, I got a mac and started developing on it. My first weekend of
effort had QuakeArena limping along while held together with duct
tape, but weekend number two had it properly playable, and weekend
number three had it brought up to full feature compatability. I
still need to do some platform specific things with odd configurations
like multi monitor and addon controllers, but basically now it’s
just a matter of compiling on the mac to bring it up to date.

This was important to me, because I felt that Quake2 had slipped a bit in
portability because it had been natively developed on windows. I like the
discipline of simultaneous portable development.

After 150 hours or so of concentrated mac development, I learned a
lot about the platform.

CodeWarrior is pretty good. I was comfortable developing there
almost immediately. I would definitely say VC++ 6.0 is a more powerful
overall tool, but CW has some nice little aspects that I like. I
am definitely looking forward to CW for linux. Unix purists may
be aghast, but I have always liked gui dev environments more than
a bunch of xterms running vi and gdb.

The hardware (even the previous generation stuff) is pretty good.

The OpenGL performance is pretty good. There is a lot of work
underway to bring the OpenGL performance to the head of the pack,
but the existing stuff works fine for development.

The low level operating system SUCKS SO BAD it is hard to believe.

The first order problem is lack of memory management / protection.

It took me a while to figure out that the zen of mac development is
“be at peace while rebooting”. I rebooted my mac system more times
the first weekend than I have rebooted all the WinNT systems I
have ever owned. True, it has gotten better now that I know my
way around a bit more, and the codebase is fully stable, but there
is just no excuse for an operating system in this day and age to
act like it doesn’t have access to memory protection.

The first thing that bit me was the static memory allocation for
the program. Modern operating systems just figure out how much
memory you need, but because the mac was originally designed for
systems without memory management, significant things have to be
laid out ahead of time.

Porting a win32 game to the mac will probably involve more work
dealing with memory than any other aspect. Graphics, sound, and
networking have reasonable analogues, but you just can’t rely
on being able to malloc() whatever you want on the mac.

Sure, game developers can manage their own memory, but an operating
system that has proper virtual memory will let you develop
a lot faster.
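The kind of up-front memory layout being described boils down to a bump allocator over one fixed block. A minimal sketch (the names are chosen to suggest a “hunk” style allocator, but this is illustrative code, not the engine’s):

```c
#include <stddef.h>
#include <string.h>

// Grab one fixed block at startup and hand out pieces with a bump
// pointer; there is no free() for individual pieces, only a wholesale
// reset, which is why sizes have to be planned ahead of time.
#define HUNK_SIZE (8 * 1024 * 1024)
static unsigned char hunk[HUNK_SIZE];
static size_t hunkUsed;

void *Hunk_Alloc(size_t size) {
    size = (size + 15) & ~(size_t)15;   // round each request up to 16 bytes
    if (hunkUsed + size > HUNK_SIZE)
        return NULL;                     // out of preplanned memory
    void *p = hunk + hunkUsed;
    hunkUsed += size;
    memset(p, 0, size);                  // hand back zeroed memory
    return p;
}

void Hunk_FreeAll(void) { hunkUsed = 0; }  // e.g. on level change
```

Everything allocated since startup is released at once by resetting the bump pointer, which suits a game’s load-level / play / unload-level lifecycle.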

The lack of memory protection is the worst aspect of mac development.
You can just merrily write all over other programs, the development
environment, and the operating system from any application.

I remember that. From dos 3.3 in 1990.

Guard pages will help catch simple overruns, but it won’t do anything
for all sorts of other problems.

The second order problem is lack of preemptive multitasking.

The general responsiveness while working with multiple apps
is significantly worse than windows, and you often run into
completely modal dialogs that don’t let you do anything else at all.

A third order problem is that a lot of the interfaces are fairly crufty.

There are still many aspects of the mac that clearly show design
decisions based on a 128k 68000 based machine. Wintel has grown
a lot more than the mac platform did. It may have been because the
intel architecture didn’t evolve gracefully and that forced the
software to reevaluate itself more fully, or it may just be that
microsoft pushed harder.

Carbon sanitizes the worst of the crap, but it doesn’t turn it
into anything particularly good looking.

MacOS X nails all these problems, but that’s still a ways away.

I did figure one thing out – I was always a little curious why
the early BeOS advocates were so enthusiastic. Coming from a
NEXTSTEP background, BeOS looked to me like a fairly interesting
little system, but nothing special. To a mac developer, it must
have looked like the promised land…


64 Bit Pointer Environments

Filed under: — johnc @ 3:31 pm

I got several vague comments about being able to read “stuff” from shared
memory, but no concrete examples of security problems.

However, Gregory Maxwell pointed out that it wouldn’t work cross platform
with 64 bit pointer environments like linux alpha. That is a killer, so
I will be forced to do everything the hard way. It’s probably for the
best, from a design standpoint anyway, but it will take a little more effort.


Virtual Machine Implementation

Filed under: — johnc @ 3:57 pm

I am considering taking a shortcut with my virtual machine implementation
that would make the integration a bit easier, but I’m not sure that it
doesn’t compromise the integrity of the base system.

I am considering allowing the interpreted code to live in the global address
space, instead of a private 0 based address space of its own. Store
instructions from the VM would be confined to the interpreter’s address
space, but loads could access any structures.
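The asymmetry of that proposal (confined stores, unrestricted loads) can be sketched in a few lines. All names here are invented for illustration, not taken from the actual interpreter:

```c
#include <stdint.h>

// Hypothetical interpreter state: the VM owns a private data block, but
// under the shortcut proposal it may read the host's full address space.
typedef struct {
    uint8_t *dataBase;  // VM-private memory
    uint32_t dataMask;  // power-of-two size minus one
} vm_t;

// Stores are confined: the address is masked into the VM block, so a
// buggy module can only scribble on its own memory.
static void VM_Store4(vm_t *vm, uint32_t addr, int32_t value) {
    *(int32_t *)(vm->dataBase + (addr & vm->dataMask)) = value;
}

// Loads (in the shortcut design) go straight through, so shared
// structures like cvars can be read at full interpreted speed.
static int32_t VM_Load4(intptr_t addr) {
    return *(int32_t *)addr;
}
```

The masking keeps store integrity cheap (one AND per store), while the unmasked load is exactly what opens the door to the bus-error and cross-platform problems discussed below.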

On the positive side:

This would allow full speed (well, full interpreted speed) access to variables
shared between the main code and the interpreted modules. This allows system
calls to return pointers, instead of filling in allocated space in the
interpreter’s address space.

For most things, this is just a convenience that will cut some development
time. Most of the shared accesses could be recast as “get” system calls,
and it is certainly arguable that that would be a more robust programming
practice.

The most prevalent change this would prevent is all the cvar_t uses. Things
could stay in the same style as Q2, where cvar accesses are free and
transparently updated. If the interpreter lives only in its own address
space, then cvar access would have to be like Q1, where looking up a
variable is a potentially time consuming operation, and you wind up adding
lots of little cvar caches that are updated every frame or restart.

On the negative side:

A client game module with a bug could cause a bus error, which would not be
possible with a pure local address space interpreter.

I can’t think of any exploitable security problems that read only access to
the entire address space opens, but if anyone thinks of something, let me
know.


Binary DLLs plus Interpreted Code

Filed under: — johnc @ 6:54 pm

More extensive comments on the interpreted-C decision later, but a quick
note: the plan is to still allow binary dll loading so debuggers can be
used, but it should be interchangeable with the interpreted code. Client
modules can only be debugged if the server is set to allow cheating, but
it would be possible to just use the binary interface for server modules
if you wanted to sacrifice portability. Most mods will be able to be
implemented with just the interpreter, but some mods that want to do
extensive file access or out of band network communications could still
be implemented just as they are in Q2. I will not endorse any use of
binary client modules, though.


The Frag

Filed under: — johnc @ 4:29 am

This was the most significant thing I talked about at The Frag, so here it
is for everyone else.

So far, the QA game architecture has been developed as two separate
binary dll’s: one for the server side game logic, and one for the
client side presentation logic.

While it was easiest to begin development like that, there are two crucial
problems with shipping the game that way: security and portability.

It’s one thing to ask the people who run dedicated servers to make informed
decisions about the safety of a given mod, but it’s a completely different
matter to auto-download a binary image to a first time user connecting to a
server they found.

The quake 2 server crashing attacks have certainly proven that there are
hackers that enjoy attacking games, and shipping around binary code would
be a very tempting opening for them to do some very nasty things.

With quake and Quake 2, all game modifications were strictly server side,
so any port of the game could connect to any server without problems.
With Quake 2’s binary server dll’s, not all ports could necessarily run a
server, but they could all play.

With significant chunks of code now running on the client side, if we stuck
with binary dll’s then the less popular systems would find that they could
not connect to new servers because the mod code hadn’t been ported. I
considered having things set up in such a way that client game dll’s could
be sort of forwards-compatible, where they could always connect and play,
but new commands and entity types just might not show up. We could also
GPL the game code to force mod authors to release source with the binaries,
but that would still be inconvenient to deal with all the porting.

Related to both issues is client side cheating. Certain cheats are easy to do
if you can hack the code, so the server will need to verify which code the
client is running. With multiple ported versions, it wouldn’t be possible
to do any binary verification.

If we were willing to wed ourselves completely to the windows platform, we
might have pushed ahead with some attempt at binary verification of dlls,
but I ruled that option out. I want QuakeArena running on every platform
that has hardware accelerated OpenGL and an internet connection.

The only real solution to these problems is to use an interpreted language
like Quake 1 did. I have reached the conclusion that the benefits of a
standard language outweigh the benefits of a custom language for our
purposes. I would not go back and extend QC, because that stretches the
effort from simply system and interpreter design to include language design,
and there is already plenty to do.

I had been working under the assumption that Java was the right way to go,
but recently I reached a better conclusion.

The programming language for QuakeArena mods is interpreted ANSI C. (well,
I am dropping the double data type, but otherwise it should be pretty
complete)

The game will have an interpreter for a virtual RISC-like CPU. This should
have a minor speed benefit over a byte-coded, stack based java interpreter.
Loads and stores are confined to a preset block of memory, and access to all
external system facilities is done with system traps to the main game code,
so it is completely secure.
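A toy version of that security model fits in a page: a register-style VM whose loads and stores are both masked into a preset block, with a single trap opcode for reaching the host. The opcode names and encoding below are invented for illustration:

```c
#include <stdint.h>
#include <stdio.h>

enum { OP_LOADI, OP_LOAD, OP_STORE, OP_ADD, OP_TRAP, OP_HALT };

typedef struct { int op, a, b, c; } instr_t;

#define VM_MEM 256
static uint8_t vmMem[VM_MEM];        // the preset block of memory
#define VM_MASK (VM_MEM - 1)

// The host decides what each trap number does; the VM cannot touch
// anything outside vmMem except through here.
static int SysTrap(int num, int arg) {
    if (num == 0) { printf("vm says: %d\n", arg); return 0; }
    return -1;
}

static int VM_Run(const instr_t *code) {
    int r[8] = {0};
    for (int pc = 0;; pc++) {
        const instr_t *i = &code[pc];
        switch (i->op) {
        case OP_LOADI: r[i->a] = i->b; break;
        case OP_LOAD:  r[i->a] = vmMem[r[i->b] & VM_MASK]; break;  // masked
        case OP_STORE: vmMem[r[i->b] & VM_MASK] = (uint8_t)r[i->a]; break;
        case OP_ADD:   r[i->a] = r[i->b] + r[i->c]; break;
        case OP_TRAP:  r[i->a] = SysTrap(i->b, r[i->c]); break;
        case OP_HALT:  return r[0];
        }
    }
}
```

Since every memory access is masked and every external effect funnels through SysTrap, hostile bytecode can at worst corrupt its own block, which is the whole point.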

The tools necessary for building mods will all be freely available: a
modified version of LCC and a new program called q3asm. LCC is a wonderful
project – a cross platform, cross compiling ANSI C compiler done in under
20K lines of code. Anyone interested in compilers should pick up a copy of
“A retargetable C compiler: design and implementation” by Fraser and Hanson.

You can’t link against any libraries, so every function must be resolved.
Things like strcmp, memcpy, rand, etc. must all be implemented directly. I
have code for all the ones I use, but some people may have to modify their
coding styles or provide implementations for other functions.
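For example, freestanding replacements for a couple of the usual suspects look like this (prefixed with my_ here only so they can coexist with a hosted libc when testing; in the mod environment they would simply take the standard names):

```c
#include <stddef.h>

// With no libc to link against, routines like these travel with the mod.
int my_strcmp(const char *a, const char *b) {
    while (*a && *a == *b) { a++; b++; }
    return (unsigned char)*a - (unsigned char)*b;
}

void *my_memcpy(void *dst, const void *src, size_t n) {
    unsigned char *d = dst;
    const unsigned char *s = src;
    while (n--) *d++ = *s++;   // simple byte copy; fine at interpreter speed
    return dst;
}
```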

It is a fair amount of work to restructure all the interfaces to not share
pointers between the system and the games, but it is a whole lot easier
than porting everything to a new language. The client game code is about
10k lines, and the server game code is about 20k lines.

The drawback is performance. It will probably perform somewhat like QC.
Most of the heavy lifting is still done in the builtin functions for path
tracing and world sampling, but you could still hurt yourself by looping
over tons of objects every frame. Yes, this does mean more load on servers,
but I am making some improvements in other parts that I hope will balance
things to about the way Q2 was on previous generation hardware.

There is also the amusing avenue of writing hand tuned virtual
assembly language for critical functions…

I think this is The Right Thing.


Problems with .plan Updates

Filed under: — johnc @ 2:50 am

It has been difficult to write .plan updates lately. Every time I start
writing something, I realize that I’m not going to be able to cover it
satisfactorily in the time I can spend on it. I have found that terse
little comments either get misinterpreted, or I get deluged by email
from people wanting me to expand upon it.

I wanted to do a .plan about my evolving thoughts on code quality
and lessons learned through quake and quake 2, but in the interest
of actually completing an update, I decided to focus on one change
that was intended to just clean things up, but had a surprising
number of positive side effects.

Since DOOM, our games have been defined with portability in mind.
Porting to a new platform involves having a way to display output,
and having the platform tell you about the various relevant inputs.
There are four principal inputs to a game: keystrokes, mouse moves,
network packets, and time. (If you don’t consider time an input
value, think about it until you do – it is an important concept)

These inputs were taken in separate places, as seemed logical at the
time. A function named Sys_SendKeyEvents() was called once a
frame that would rummage through whatever it needed to on a
system level, and call back into game functions like Key_Event( key,
down ) and IN_MouseMoved( dx, dy ). The network system
dropped into system specific code to check for the arrival of packets.
Calls to Sys_Milliseconds() were littered all over the code for
various reasons.

I felt that I had slipped a bit on the portability front with Q2 because
I had been developing natively on Windows NT instead of cross-developing
from NEXTSTEP, so I was reevaluating all of the system
interfaces for Q3.

I settled on combining all forms of input into a single system event
queue, similar to the Windows message queue. My original intention
was to just rigorously define where certain functions were called and
cut down the number of required system entry points, but it turned
out to have much stronger benefits.

With all events coming through one point (the return values from
system calls, including the filesystem contents, are “hidden” inputs
that I make no attempt at capturing), it was easy to set up a
journalling system that recorded everything the game received. This
is very different than demo recording, which just simulates a network
level connection and lets time move at its own rate. Realtime
applications have a number of unique development difficulties
because of the interaction of time with inputs and outputs.

Transient flaw debugging. If a bug can be reproduced, it can be
fixed. The nasty bugs are the ones that only happen every once in a
while after playing randomly, like occasionally getting stuck on a
corner. Often when you break in and investigate it, you find that
something important happened the frame before the event, and you
have no way of backing up. Even worse are realtime smoothness
issues – was that jerk of his arm a bad animation frame, a network
interpolation error, or my imagination?

Accurate profiling. Using an intrusive profiler on Q2 doesn’t give
accurate results because of the realtime nature of the simulation. If
the program is running half as fast as normal due to the
instrumentation, it has to do twice as much server simulation as it
would if it wasn’t instrumented, which also goes slower, which
compounds the problem. Aggressive instrumentation can slow it
down to the point of being completely unplayable.

Realistic BoundsChecker runs. BoundsChecker is a great tool, but
you just can’t interact with a game built for final checking, it’s just
waaaaay too slow. You can let a demo loop play back overnight, but
that doesn’t exercise any of the server or networking code.

The key point: Journaling of time along with other inputs turns a
realtime application into a batch process, with all the attendant
benefits for quality control and debugging. These problems, and
many more, just go away. With a full input trace, you can accurately
restart the session and play back to any point (conditional
breakpoint on a frame number), or let a session play back at an
arbitrarily degraded speed, but cover exactly the same code paths.
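The mechanism can be sketched in C. This is an illustrative reconstruction of the idea, not the actual Quake 3 source; all names and layouts here are hypothetical:

```c
#include <stdio.h>

/* Every input -- keystroke, mouse move, packet arrival -- becomes one record
   in a single queue, and each record carries its timestamp, because time is
   an input too. */
typedef enum { EV_KEY, EV_MOUSE, EV_PACKET } eventType_t;

typedef struct {
    int         time;           /* milliseconds */
    eventType_t type;
    int         value1, value2;
} sysEvent_t;

#define QUEUE_SIZE 256
static sysEvent_t eventQueue[QUEUE_SIZE];
static int        eventHead, eventTail;
static FILE      *journalFile;  /* non-NULL when recording a session */

/* The platform layer funnels every input through this one entry point. */
void Sys_QueEvent(int time, eventType_t type, int value1, int value2) {
    sysEvent_t *ev = &eventQueue[eventHead++ % QUEUE_SIZE];
    ev->time = time;
    ev->type = type;
    ev->value1 = value1;
    ev->value2 = value2;
}

/* The game pulls every input from this one function.  Writing each record
   to a journal captures the whole session; reading records back from a
   journal instead of the queue replays it deterministically. */
sysEvent_t Sys_GetEvent(void) {
    sysEvent_t ev = eventQueue[eventTail++ % QUEUE_SIZE];
    if (journalFile)
        fwrite(&ev, sizeof(ev), 1, journalFile);
    return ev;
}
```

Replaying a journal is what turns the realtime session into a batch process: the same records come back in the same order at whatever speed you like.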

I’m sure lots of people realize that immediately, but it only truly sunk
in for me recently. In thinking back over the years, I can see myself
feeling around the problem, implementing partial journaling of
network packets, and included the “fixedtime” cvar to eliminate most
timing reproducibility issues, but I never hit on the proper global
solution. I had always associated journaling with turning an
interactive application into a batch application, but I never
considered the small modification necessary to make it applicable to
a realtime application.

In fact, I was probably blinded to the obvious because of one of my
very first successes: one of the important technical achievements
of Commander Keen 1 was that, unlike most games of the day, it
adapted its play rate based on the frame speed (remember all those
old games that got unplayable when you got a faster computer?). I
had just resigned myself to the non-deterministic timing of frames
that resulted from adaptive simulation rates, and that probably
influenced my perspective on it all the way until this project.

It’s nice to see a problem clearly in its entirety for the first time, and
know exactly how to address it.


Dual-Processor Acceleration for QuakeArena

Filed under: — johnc @ 2:43 am

I recently set out to start implementing the dual-processor acceleration
for QA, which I have been planning for a while. The idea is to have one
processor doing all the game processing, database traversal, and lighting,
while the other processor does absolutely nothing but issue OpenGL calls.

This effectively treats the second processor as a dedicated geometry
accelerator for the 3D card. This can only improve performance if the
card isn’t the bottleneck, but voodoo2 and TNT cards aren’t hitting their
limits at 640*480 on even very fast processors right now.

For single player games where there is a lot of cpu time spent running the
server, there could conceivably be up to an 80% speed improvement, but for
network games and timedemos a more realistic goal is a 40% or so speed
increase. I will be very satisfied if I can make a dual Pentium Pro 200
system perform like a PII-300.

I started on the specialized code in the renderer, but it struck me that
it might be possible to implement SMP acceleration with a generic OpenGL
driver, which would allow Quake2 / sin / halflife to take advantage of it
well before QuakeArena ships.

It took a day of hacking to get the basic framework set up: an smpgl.dll
that spawns another thread that loads the original opengl32.dll or
3dfxgl.dll, and watches a work queue for all the functions to call.
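The framework amounts to a producer/consumer command queue. Here is a single-threaded sketch with hypothetical names; the real wrapper would add thread spawning and proper synchronization:

```c
#include <string.h>

#define MAX_PARMS  8
#define QUEUE_SIZE 4096

/* One record per intercepted GL call: function pointer, parameter count,
   and all-32-bit parameters. */
typedef struct {
    void (*func)(const int *parms);
    int  numParms;
    int  parms[MAX_PARMS];
} glCommand_t;

static glCommand_t commandQueue[QUEUE_SIZE];
static volatile int head, tail;   /* head: game thread, tail: driver thread */

/* Producer side: the fake GL entry points call this instead of rendering. */
void EnqueueCall(void (*func)(const int *), const int *parms, int numParms) {
    glCommand_t *cmd = &commandQueue[head % QUEUE_SIZE];
    cmd->func = func;
    cmd->numParms = numParms;
    memcpy(cmd->parms, parms, numParms * sizeof(int));
    head++;   /* a real SMP version needs a memory barrier before this */
}

/* Consumer side: the dedicated thread spins here issuing real driver calls. */
int DrainOneCall(void) {
    if (tail == head)
        return 0;   /* queue empty */
    glCommand_t *cmd = &commandQueue[tail % QUEUE_SIZE];
    cmd->func(cmd->parms);
    tail++;
    return 1;
}

/* Tiny stand-in for a real driver entry point, for demonstration. */
static int lastParm;
static void StubGLCall(const int *parms) { lastParm = parms[0]; }
```

Every byte pushed through a structure like this crosses main memory twice, which is exactly where the overhead described below comes from.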

I get it basically working, then start doing some timings. It’s 20%
slower than the single processor version.

I go in and optimize all the queueing and working functions, tune the
communications facilities, check for SMP cache collisions, etc.

After a day of optimizing, I finally squeak out some performance gains on
my tests, but they aren’t very impressive: 3% to 15% on one test scene,
but still slower on the other one.

This was fairly depressing. I had always been able to get pretty much
linear speedups out of the multithreaded utilities I wrote, even up to
sixteen processors. The difference is that the utilities just split up
the work ahead of time, then don’t talk to each other until they are done,
while here the two threads work in a high bandwidth producer / consumer
relationship.

I finally got around to timing the actual communication overhead, and I was
appalled: it was taking 12 msec to fill the queue, and 17 msec to read it out
on a single frame, even with nothing else going on. I’m surprised things
got faster at all with that much overhead.

The test scene I was using created about 1.5 megs of data to relay all the
function calls and vertex data for a frame. That data had to go to main
memory from one processor, then back out of main memory to the other.
Admittedly, it is a bitch of a scene, but that is where you want the
speedup the most.

The write times could be made over twice as fast if I could turn on the
PII’s write combining feature on a range of memory, but the reads (which
were the gating factor) can’t really be helped much.

Streaming large amounts of data to and from main memory can be really grim.
The next write may force a cache writeback to make room for it, then the
read from memory to fill the cacheline (even if you are going to write over
the entire thing), then eventually the writeback from the cache to main
memory where you wanted it in the first place. You also tend to eat one
more read when your program wants to use the original data that got evicted
at the start.

What is really needed for this type of interface is a streaming read cache
protocol that performs similarly to the write combining: three dedicated
cachelines that let you read or write from a range without evicting other
things from the cache, and automatically prefetching the next cacheline as
you read.

Intel’s write combining modes work great, but they can’t be set directly
from user mode. All drivers that fill DMA buffers (like OpenGL ICDs…)
should definitely be using them, though.

Prefetch instructions can help with the stalls, but they still don’t prevent
all the wasted cache evictions.

It might be possible to avoid main memory altogether by arranging things
so that the sending processor ping-pongs between buffers that fit in L2,
but I’m not sure if a cache coherent read on PIIs just goes from one L2
to the other, or if it becomes a forced memory transaction (or worse, two
memory transactions). It would also limit the maximum amount of overlap
in some situations. You would also get cache invalidation bus traffic.

I could probably trim 30% of my data by going to a byte level encoding of
all the function calls, instead of the explicit function pointer / parameter
count / all-parms-are-32-bits that I have now, but half of the data is just
raw vertex data, which isn’t going to shrink unless I did evil things like
quantize floats to shorts.
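For comparison, a byte-level encoding spends one opcode byte plus only the bytes a call actually needs, where the explicit scheme spends a fixed pointer-plus-32-bit-slots record. The opcode names and layouts below are hypothetical:

```c
#include <stddef.h>

enum { CMD_COLOR3UB = 1, CMD_END };

/* The explicit scheme: one fixed record per call.  For a three-parameter
   call this is 20 bytes on a 32-bit machine. */
typedef struct {
    void *func;
    int   numParms;
    int   parms[3];
} explicitRecord_t;

/* The byte-level scheme: one opcode byte, then only the bytes needed --
   4 bytes total for a glColor3ub-style call instead of 20. */
size_t EmitColor3ub(unsigned char *out, unsigned char r,
                    unsigned char g, unsigned char b) {
    out[0] = CMD_COLOR3UB;
    out[1] = r;
    out[2] = g;
    out[3] = b;
    return 4;
}
```

The savings only apply to the call overhead, though; raw vertex arrays dominate the stream and stay the same size either way.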

Too much effort for what looks like a relatively minor speedup. I’m giving
up on this approach, and going back to explicit threading in the renderer so
I can make most of the communicated data implicit.

Oh well. It was amusing work, and I learned a few things along the way.


Nvidia TNT / Blackjack No More

Filed under: — johnc @ 12:54 am

I just got a production TNT board installed in my Dolch today.

The Riva 128 was a troublesome part. It scored well on benchmarks, but it had
some pretty broken aspects to it, and I never recommended it (you are better
off with an Intel i740).

There aren’t any troublesome aspects to TNT. It’s just great. Good work, Nvidia.

In terms of raw speed, a 16 bit color multitexture app (like quake / quake 2)
should still run a bit faster on a voodoo2, and an SLI voodoo2 should be faster
for all 16 bit color rendering, but TNT has a lot of other things going for it:

32 bit color and 24 bit z buffers. They cost speed, but it is usually a better
quality tradeoff to go one resolution lower but with twice the color depth.

More flexible multitexture combine modes. Voodoo can use its multitexture for
diffuse lightmaps, but not for the specular lightmaps we offer in QuakeArena.
If you want shiny surfaces, voodoo winds up leaving half of its texturing
power unused (you can still run with diffuse lightmaps for max speed).

Stencil buffers. There aren’t any apps that use it yet, but stencil allows
you to do a lot of neat tricks.

More texture memory. Even more than it seems (16 vs 8 or 12), because all of the
TNT’s memory can be used without restrictions. Texture swapping is the voodoo’s
biggest problem.

3D in desktop applications. There is enough memory that you don’t have to worry
about window and desktop size limits, even at 1280*1024 true color resolution.

Better OpenGL ICD. 3dfx will probably do something about that, though.

This is the shape of 3D boards to come. Professional graphics level
rendering quality with great performance at a consumer price.

We will be releasing preliminary QuakeArena benchmarks on all the new boards
in a few weeks. Quake 2 is still a very good benchmark for moderate polygon
counts, so our test scenes for QA involve very high polygon counts, which
stresses driver quality a lot more. There are a few surprises in the current
results.

A few of us took a couple days off in vegas this weekend. After about
ten hours at the tables over friday and saturday, I got a tap on the shoulder…

Three men in dark suits introduced themselves and explained that I was welcome
to play any other game in the casino, but I am not allowed to play
blackjack anymore.

Ah well, I guess my blackjack days are over. I was actually down a bit for
the day when they booted me, but I made +$32k over five trips to vegas in the
past two years or so.

I knew I would get kicked out sooner or later, because I don’t play “safely”.
I sit at the same table for several hours, and I range my bets around 10 to 1.


HDTV Style QuakeArena

Filed under: — johnc @ 10:13 pm

I added support for HDTV style wide screen displays in QuakeArena, so
24″ and 28″ monitors can now cover the entire screen with game graphics.

On a normal 4:3 aspect ratio screen, a 90 degree horizontal field of view
gives a 75 degree vertical field of view. If you keep the vertical fov
constant and run on a wide screen, you get a 106 degree horizontal fov.

Because we specify fov with the horizontal measurement, you need to change
fov when going into or out of a wide screen mode. I am considering changing
fov to be the vertical measurement, but it would probably cause a lot of
confusion if “fov 90” becomes a big fisheye.

Many video card drivers are supporting the ultra high res settings
like 1920 * 1080, but hopefully they will also add support for lower
settings that can be good for games, like 856 * 480.

I spent a day out at apple last week going over technical issues.

I’m feeling a lot better about MacOS X. Almost everything I like about
rhapsody will be there, plus some solid additions.

I presented the OpenGL case directly to Steve Jobs as strongly as possible.

If Apple embraces OpenGL, I will be strongly behind them. I like OpenGL more
than I dislike MacOS. :)

Last Friday I got a phone call: “want to make some exhibition runs at the
import / domestic drag wars this Sunday?”. It wasn’t particularly good
timing, because the TR had a slipping clutch and the F50 still hasn’t gotten
its computer mapping sorted out, but we got everything functional in time.

The tech inspector said that my cars weren’t allowed to run in the 11s
at the event because they didn’t have roll cages, so I was supposed to go

The TR wasn’t running its best, only doing low 130 mph runs. The F50 was
making its first sorting out passes at the event, but it was doing ok. My
last pass was an 11.8(oops) @ 128, but we still have a ways to go to get the
best times out of it.

I’m getting some racing tires on the F50 before I go back. It sucked watching
a tiny honda race car jump ahead of me off the line. :)

I think ESPN took some footage at the event.


Twin Turbo Vitamins

Filed under: — johnc @ 3:37 pm

My F50 took some twin turbo vitamins.

Rear wheel numbers:
602 HP @ 8200 rpm
418 ft-lb @ 7200 rpm

This is very low boost, but I got the 50% power increase I was looking for,
and hopefully it won’t be making any contributions to my piston graveyard.

There will be an article in Turbo magazine about it, and several other car
magazines want to test it out. They usually start out with “He did WHAT
to an F50???” :)

Brian is getting a nitrous kit installed in his viper, and Cash just got his
suspension beefed up, so we will be off to the dragstrip again next month to
sort everything out again.


Rhapsody DR2

Filed under: — johnc @ 2:57 am

I have spent the last two days working with Apple’s Rhapsody DR2, and I like
it a lot.

I was disappointed with the original DR1 release. It was very slow and
seemed to have added the worst elements of the mac experience (who the hell
came up with that windowshade minimizing?) while taking away some of the
strengths of NEXTSTEP.

Things are a whole lot better in the latest release. General speed is up,
memory consumption is down, and the UI feels consistent and productive.

It’s still not as fast as windows, and probably never will be, but I think the
tradeoffs are valid.

There are so many things that are just fundamentally better in the rhapsody
design than in windows: frameworks, the yellow box apis, fat binaries,
buffered windows, strong multi user support, strong system / local separation,
netinfo, etc.

Right now, I think WindowsNT is the best place to do graphics development work,
but if the 3D acceleration issue was properly addressed on rhapsody, I think that
I could be happy using it as my primary development platform.

I ported the current Quake codebase to rhapsody to test out Conix’s beta OpenGL.
The game isn’t really playable with the software emulated OpenGL, but it
functions properly, and it makes a fine dedicated server.

We are going to try to stay on top of the portability a little better for QA.
Quake 2 slid a bit because we did the development on NT instead of NEXTSTEP,
and that made the irix port a lot more of a hassle than the original glquake port.

I plan on using the rhapsody system as a dedicated server during development,
and Brian will be using an Alpha-NT system for a lot of testing, which should
give us pretty good coverage of the portability issues.

I’m supposed to go out and have a bunch of meetings at apple next month to cover
games, graphics, and hardware. Various parts of apple have scheduled
meetings with me on three separate occasions over the past couple years, but they
have always been canceled for one reason or another (they laid off the people
I was going to meet with once…).

I have said some negative things about MacOS before, but my knowledge of
the mac is five years old. There was certainly the possibility that things
had improved since then, so I spent some time browsing mac documentation
recently. I was pretty amused. A stack sniffer. Patching trap vectors.
Cooperative multitasking. Application memory partitions. Heh.

I’m scared of MacOS X. As far as I can tell, the basic plan is to take rhapsody
and bolt all the MacOS APIs into the kernel. I understand that that may well
be a sensible biz direction, but I fear it.

In other operating system news, Be has glquake running hardware accelerated on
their upcoming OpenGL driver architecture. I gave them access to the glquake and
quake 2 codebases for development purposes, and I expect we will work out an
agreement for distribution of the ports.

Any X server vendors working on hardware accelerated OpenGL should get in touch
with Zoid about interfacing and tuning with the Id OpenGL games on linux.


Flag Movement Styles

Filed under: — johnc @ 4:47 pm

I am not opposed to adding a flag to control the movement styles. I was
rather expecting it to be made optional in 3.17, but I haven’t been directly
involved in the last few releases.

The way this played out in public is a bit unfortunate. Everyone at Id is
busy full time with the new product, so we just weren’t paying enough attention
to the Quake 2 modifications. Some people managed to read into my last update
that we were blaming Zoid for things. Uh, no. I think he was acting within
his charter (catering to the community) very well, it just interfered with an
aspect of the game that shouldn’t have been modified. We just never made it
explicitly clear that it shouldn’t have been modified.

It is a bit amusing how after the QuakeArena announcement, I got flamed by
lots of people for abandoning single player play (even though we aren’t, really)
but after I say that Quake 2 can’t forget that it is a single player game, I get
flamed by a different set of people who think it is stupid to care about single
player anymore when all “everyone” plays is multiplayer. The joy of having a
wide audience that knows your email address.


Movement Physics

Filed under: — johnc @ 4:40 pm

Here is the real story on the movement physics changes.

Zoid changed the movement code in a way that he felt improved gameplay in the
3.15 release.

We don’t directly supervise most of the work Zoid does. One of the main
reasons we work with him is that I respect his judgment, and I feel that his
work benefits the community quite a bit with almost no effort on my part. If
I had to code review every change he made, it wouldn’t be worth the effort.

Zoid has “ownership” of the Quake, Glquake, and QuakeWorld codebases. We don’t
intend to do any more modifications at Id on those sources, so he has pretty
free rein within his discretion.

We passed over the Quake 2 codebase to him for the addition of new features
like auto download, but it might have been a bit premature, because official
mission packs were still in development, and unlike glquake and quakeworld,
Q2 is a product that must remain official and supported, so the scope of his
freedoms should have been spelled out a little more clearly.

The air movement code wasn’t a good thing to change in Quake 2, because the
codebase still had to support all the commercial single player levels, and
subtle physics changes can have lots of unintended effects.

QuakeWorld didn’t support single player maps, so it was a fine place to
experiment with physics changes.

QuakeArena is starting with fresh new data, so it is also a good place to
experiment with physics changes.

Quake 2 cannot be allowed to evolve in a way that detracts from the commercial
single player levels.

The old style movement should not be referred to as “real world physics”. None
of the quake physics are remotely close to real world physics, so I don’t think
one way is significantly more “real” than the other. In Q2, you accelerate from
0 to 27 mph in 1/30 of a second, which is just as unrealistic as being able to
accelerate in midair…
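That instant ground acceleration comes from the clamp-to-wishspeed accelerate routine the Quake engines share. This is a simplified paraphrase of the idiom, not the exact shipped code:

```c
/* Push velocity toward wishspeed along wishdir, gaining at most
   accel * wishspeed units per second.  With the games' high ground accel
   values the clamp is hit almost immediately, which is the
   0-to-full-speed-in-a-frame effect; air movement runs the same kind of
   routine with different inputs. */
void PM_Accelerate(float velocity[3], const float wishdir[3],
                   float wishspeed, float accel, float frametime) {
    float currentspeed = velocity[0] * wishdir[0]
                       + velocity[1] * wishdir[1]
                       + velocity[2] * wishdir[2];
    float addspeed = wishspeed - currentspeed;
    if (addspeed <= 0)
        return;                        /* already at or past wishspeed */
    float accelspeed = accel * frametime * wishspeed;
    if (accelspeed > addspeed)
        accelspeed = addspeed;         /* never overshoot the wish speed */
    for (int i = 0; i < 3; i++)
        velocity[i] += accelspeed * wishdir[i];
}
```

Because the gain is clamped against the component of velocity along wishdir, small tweaks to how wishdir is computed in the air ripple through every jump in every shipped level, which is exactly why the change was risky in Q2.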


Birth of Quake 3 Arena

Filed under: — johnc @ 5:16 pm

My last two .plan updates have described efforts that were not in our original
plan for quake 3, which was “quake 2 game and network technology with a new
graphics engine".

We changed our minds.

The new product is going to be called “Quake Arena", and will consist
exclusively of deathmatch style gaming (including CTF and other derivatives).
The single player game will just be a progression through a ranking ladder
against bot AIs. We think that can still be made an enjoyable game, but
it is definitely a gamble.

In the past, we have always been designing two games at once, the single
player game and the multi player game, and they often had conflicting goals.
For instance, the client-server communications channel discouraged massive
quantities of moving entities that would have been interesting in single
player, while the maps and weapons designed for single player were not ideal
for multiplayer. The largest conflict was just raw development time. Time
spent on monsters is time not spent on player movement. Time spent on unit
goals is time not spent on game rules.

There are many wonderful gaming experiences in single player FPS, but we are
choosing to leave them behind to give us a purity of focus that will let us
make significant advances in the multiplayer experience.

The emphasis will be on making every aspect as robust and high quality as
possible, rather than trying to add every conceivable option anyone could
want. We will not be trying to take the place of every mod ever produced, but
we hope to satisfy a large part of the network gaming audience with the out of
box experience.

There is a definite effect on graphics technology decisions. Much of the
positive feedback in a single player FPS is the presentation of rich visual
scenes, which are often at the expense of framerate. A multiplayer level
still needs to make a good first impression, but after you have seen it a
hundred times, the speed of the game is more important. This means that there
are many aggressive graphics technologies that I will not pursue because they
are not appropriate to the type of game we are creating.

The graphics engine will still be OpenGL only, with significant new features
not seen anywhere before, but it will also have fallback modes to render at
roughly Quake-2 quality and speed.

The Client as a Dumb Terminal

Filed under: — johnc @ 12:59 am

I am giving up on one of my cherished networking concepts – the client as a
dumb terminal.

With sub 200 msec network connections of reasonable bandwidth, pure
interpolating is a reasonable solution that is very robust and elegant.

With modem based internet connections having 300+ msec pings and patchy
delivery quality, pure interpolation just doesn’t give a good enough game
play experience.

Quake 1 had all entities in the world strictly interpolated (with a 20 hz
default clock), but had several aspects of the game hard coded on the client,
like the view bobbing, damage flashes, and status bar.

QuakeWorld was my experimentation with adding a lot of specialized logic to
improve network play. An advantage I had was that the gameplay was all done,
so I didn’t mind adding some quite hardcoded things to improve nailguns and
shotguns, among other things. The largest change was adding client side
movement prediction, which basically threw out the notion of the general
purpose client.

Quake 2 was intended to be more general and flexible than Q1/QW, with almost
everything completely configurable by the server. At the time of q2test, it
was (with a fixed 10 hz clock).

Before shipping, I wound up integrating client side movement prediction like
QW. Having gone that far, I really should have moved the simulation of a lot
of the other view presentation logic (head bobs / kicks, etc) back to the
client. Because these are still run on the server, a lagged connection will
give you odd effects like falling off a cliff, running away, then having the
head kick and the landing crunch happen when you are 50 feet away from the
point of impact.

So basically I wound up losing the elegance, but I didn’t reap all the
benefits I could have.

I am still holding to my stronger networking belief, though – centralized,
authoritative servers, as opposed to peer to peer interaction. I REALLY
think distributed simulation among clients is a VERY BAD idea for networked
games. Yes, there are some theoretical advantages to being able to hand off
the simulation of some objects, but I have plenty of reasons to not want to
do it. Client side movement prediction is simulation, but it has no bearing
on the server, it is strictly to give a better presentation of the data the
server has provided.

The new paradigm is that the server controls all information necessary for
the rules of the game to function, but the client controls all presentation
of that information to the user through models, audio, and motion.

There were moves in that direction visible in Quake 2 – the temp entities
and entity events that combined sounds and model animations run entirely on
the client side. Everything will soon be like this.

This saves some degree of network bandwidth, because instead of specifying
the model, skin, frame, etc, we can just say what type of entity it is, and
the client can often determine everything else by itself. This also enables
more aggressive multi-part entities that would have been unreasonable to do
if they each had to be sent separately over the network connection.

Almost all cycling animations can be smoother. If the client knows that the
character is going through his death animation for instance, it can advance
the frames itself, rather than having the server tell it when every single
frame changes.
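The death animation case can be sketched like this. The structure and names are illustrative, not the shipped protocol: the server sends one event with a start time, and the client derives the current frame locally every render frame:

```c
/* Hypothetical per-entity animation state, set once from a server event. */
typedef struct {
    int startTime;        /* server timestamp when the animation began, msec */
    int firstFrame;
    int numFrames;
    int framesPerSecond;
    int looping;          /* cycling animations wrap; death anims hold */
} animState_t;

/* Called on the client each render frame -- no further server traffic
   is needed to keep the animation advancing smoothly. */
int CL_AnimationFrame(const animState_t *a, int clientTime) {
    int f = (clientTime - a->startTime) * a->framesPerSecond / 1000;
    if (a->looping)
        f %= a->numFrames;
    else if (f >= a->numFrames)
        f = a->numFrames - 1;         /* hold the last frame of a death anim */
    return a->firstFrame + f;
}
```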

Motion of projectiles can be predicted accurately on the client side.

All aspects of your own characters movement and presentation should be
completely smooth until acted upon (shot) by an outside entity.

All the clever coding in the world can’t teleport bits from other computers,
so lag doesn’t go away, but most of the other hitchy effects of network play
can be resolved. Lag reduction is really a separate topic. QuakeWorld had
instant response packets, because it was designed for a dedicated server
only, which helped quite a bit.

During the project’s development, the client side code will be in a C DLL, but
I intend to Do The Right Thing and make it Java based before shipping. I
absolutely refuse to download binary code to a client.


Limits of Input under Windows

Filed under: — johnc @ 12:46 am

I spent quite a while investigating the limits of input under windows
recently. I found out a few interesting things:

Mouse sampling on win95 only happens every 25ms. It doesn’t matter if you
check the cursor or use DirectInput, the values will only change 40 times
a second.

This means that with normal checking, the mouse control will feel slightly
stuttery whenever the framerate is over 20 fps, because on some frames you
will be getting one input sample, and on other frames you will be getting
two. The difference between two samples and three isn’t very noticeable, so
it isn’t much of an issue below 20 fps. Above 40 fps it is a HUGE issue,
because the frames will be bobbing between one sample and zero samples.

I knew there were some sampling quantization issues early on, so I added
the “m_filter 1” variable, but it really wasn’t an optimal solution. It
averaged together the samples collected at the last two frames, which
worked out ok if the framerate stayed consistently high and you were only
averaging together one to three samples, but when the framerate dropped to
10 fps or so, you wound up averaging together a dozen more samples than
were really needed, giving the “rubber stick” feel to the mouse control.

I now have three modes of mouse control:

in_mouse 1:
Mouse control with standard win-32 cursor calls, just like Quake 2.

in_mouse 2:
Mouse control using DirectInput to sample the mouse relative counters
each frame. This behaves like winquake with -dinput. There isn’t a lot
of difference between this and 1, but you get a little more precision, and
you never run into window clamping issues. If at some point in the future
microsoft changes the implementation of DirectInput so that it processes
all pending mouse events exactly when the getState call happens, this will
be the ideal input mode.

in_mouse 3:
Processes DirectInput mouse movement events, and filters the amount of
movement over the next 25 milliseconds. This effectively adds about 12 ms
of latency to the mouse control, but the movement is smooth and consistent
at any variable frame rate. This will be the default for Quake 3, but some
people may want the 12ms faster (but rougher) response time of mode 2.

It takes a pretty intense player to even notice the difference in most
cases, but if you have a setup that can run a very consistent 30 fps you
will probably appreciate the smoothness. At 60 fps, anyone can tell the
difference, but rendering speeds will tend to cause a fair amount of
jitter at those rates no matter what the mouse is doing.
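One way the in_mouse 3 filter could be realized (a guess at the approach, not the shipped code): remember each DirectInput event's timestamp, and each frame hand the game only the portion of each event's movement whose 25 ms window overlaps the frame interval:

```c
#define FILTER_MSEC 25
#define MAX_EVENTS  64

typedef struct {
    int   time;   /* DirectInput event timestamp, msec */
    float dx;     /* relative movement reported by the event */
} mouseEvent_t;

static mouseEvent_t events[MAX_EVENTS];
static int numEvents;

/* Called for each buffered DirectInput event. */
void IN_MouseEvent(int time, float dx) {
    if (numEvents < MAX_EVENTS) {
        events[numEvents].time = time;
        events[numEvents].dx = dx;
        numEvents++;
    }
}

/* Called once per frame with the frame interval [prevTime, curTime].
   Each event's movement is handed out in proportion to how much of its
   25 ms window overlaps the frame, so total movement is preserved but
   arrives smoothly, roughly half a window (~12 ms) late on average. */
float IN_FilteredMove(int prevTime, int curTime) {
    float move = 0;
    for (int i = 0; i < numEvents; i++) {
        int start = events[i].time;
        int end   = start + FILTER_MSEC;
        int lo = prevTime > start ? prevTime : start;
        int hi = curTime < end ? curTime : end;
        if (hi > lo)
            move += events[i].dx * (hi - lo) / FILTER_MSEC;
    }
    /* a real version would also retire events older than FILTER_MSEC */
    return move;
}
```

The smoothing works at any variable frame rate because the output depends only on wall-clock overlap, not on how many samples happened to land in a frame.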

DirectInput on WindowsNT does not log mouse events as they happen, but
seems to just do a poll when called, so they can’t be filtered properly.

Keyboard sampling appears to be millisecond precise on both OSes, though.

In doing this testing, it has become a little bit more tempting to try to
put in more leveling optimizations to allow 60 hz framerates on the highest
end hardware, but I have always shied away from targeting very high
framerates as a goal, because when you miss by a tiny little bit, the drop
from 60 to 30 (1 to 2 vertical retraces) fps is extremely noticeable.

I have also concluded that the networking architecture for Quake 2 was
just not the right thing. The interpolating 10 hz server made a lot of
animation easier, which fit with the single player focus, but it just
wasn’t a good thing for internet play.

Quake 3 will have an all new entity communication mechanism that should
be solidly better than any previous system. I have some new ideas that go
well beyond the previous work that I did on QuakeWorld.

It’s tempting to try to roll the new changes back into Quake 2, but a lot
of them are pretty fundamental, and I’m sure we would bust a lot of
important single player stuff while gutting the network code.

(Yes, we made some direction changes in Quake 3 since the original
announcement when it was to be based on the Quake 2 game and networking
with just a new graphics engine)



Filed under: — johnc @ 2:29 pm

Congratulations to Epic, Unreal looks very good.


Drag Strip Day

Filed under: — johnc @ 11:01 pm

A 94 degree day at the dragstrip today. Several 3D Realms and Norwood
Autocraft folk also showed up to run. We got to weigh most of the cars on
the track scales, which gives us a few more data points.

11.6 @ 125 Bob Norwood’s Ferrari P4 race car (2200 lbs)
11.9 @ 139 John Carmack’s twin turbo Testarossa (3815 lbs)
11.9 @ 117 Paul Steed’s YZF600R bike
12.1 @ 122 John Carmack’s F50 (3205 lbs)
12.3 @ 117 Brian’s Viper GTS (3560 lbs)
13.7 @ 103 John Cash’s supercharged M3
14.0 @ 96 Scott Miller’s Lexus GS400
15.0 @ ??? Someone’s Volkswagen GTI
15.1 @ ??? Christian’s Boxster (with Tim driving)

Weight is the key for good ETs. The TR has a considerably better power-to-weight
ratio than the P4, but it can’t effectively use most of the power
until it gets into third gear. The viper is actually making more power than
the F50 (Brian got a big kick out of that after his dyno run), but 350 lbs
more than compensated for it.

I wanted to hit 140 in the TR, but the clutch started slipping on the last
run and I called it a day.

I was actually surprised the F50 ran 122 mph, which is the same as the F40 did
on a 25 degree cooler day. I was running with the top off, so it might
even be capable of going a bit faster with it on.

The F50 and the viper were both very consistent performers, but the TR and
the supercharged M3 were all over the place with their runs.

Brian knocked over a tenth off of his times in spite of the heat, due to
launch practice and some inlet modifications. He also power shifted on
his best run.

It was pretty funny watching the little Volkswagen consistently beat up on
a tire shredding trans-am.

George Broussard had his newly hopped up 911 turbo, but it broke the trans
on its very first run. We were expecting him to be in the 11’s.

We probably won’t run again until either I get the F50 souped up, or my
GTO gets finished.


Bad Programming in Quake

Filed under: — johnc @ 9:18 pm

Here is an example of some bad programming in Quake:

There are three places where text input is handled in the game: the console,
the chat line, and the menu fields. They all used completely different code
to manage the input line and display the output. Some allowed pasting from
the system clipboard, some allowed scrolling, some accepted unix control
character commands, etc. A big mess.

Quake 3 will finally have full support for international keyboards and
character sets. This turned out to be a bit more trouble than expected
because of the way Quake treated keys and characters, and it led to a
rewrite of a lot of the keyboard handling, including the full cleanup and
improvement of text fields.

A similar cleanup of the text printing happened when Cash implemented general
colored text: we had at least a half dozen different little loops to print
strings with slightly different attributes, but now we have a generalized one
that handles embedded color commands or force-to-color printing.
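A generalized print loop of the kind described might look like this sketch (the caret-digit color convention and all names here are invented for illustration; they are not id's actual code):

```c
#define COLOR_ESCAPE '^'   /* hypothetical embedded color command marker */

/* Stand-in for the engine's glyph blit routine. */
static void DrawChar(int x, int y, int color, char c)
{
    (void)x; (void)y; (void)color; (void)c;
}

/* One generalized print loop: honors embedded color commands like "^1",
 * or forces every glyph to forceColor when forceColor >= 0.
 * Returns the number of glyphs actually drawn. */
int DrawString(int x, int y, const char *s, int forceColor)
{
    int color = 7;   /* default color index */
    int drawn = 0;

    while (*s) {
        if (*s == COLOR_ESCAPE && s[1] >= '0' && s[1] <= '9') {
            if (forceColor < 0)
                color = s[1] - '0';
            s += 2;          /* a color command consumes two chars, draws none */
            continue;
        }
        DrawChar(x, y, forceColor >= 0 ? forceColor : color, *s);
        x += 8;              /* assumed 8-pixel glyph advance */
        drawn++;
        s++;
    }
    return drawn;
}
```

One loop then serves the console, chat line, and menu fields alike, instead of a half dozen near-duplicates.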

Amidst all the high end graphics work, sometimes it is nice to just fix up
something elementary.


New Technologies for Quake3/Trinity

Filed under: — johnc @ 3:10 am

Here are some notes on a few of the technologies that I researched in
preparing for the Quake3/Trinity engine. I got a couple months of pretty
much wide open research done at the start, but it turned out that none of
the early research actually had any bearing on the directions I finally
decided on. Ah well, I learned a lot, and it will probably pay off at
some later time.

I spent a little while doing some basic research with lumigraphs, which
are sort of a digital hologram. The space requirements are IMMENSE, on
the order of several gigs uncompressed for even a single full sized room.
I was considering the possibility of using very small lumigraph fragments
(I called them “lumigraphlets”) as imposters for large clusters of areas,
similar to approximating an area with a texture map, but it would effectively
be a view dependent texture.

The results were interesting, but transitioning seamlessly would be difficult,
the memory was still large, and it has all the same caching issues that any
impostor scheme has.

Another approach I worked on was basically extending the sky box code style of
rendering from Quake 2 into a complete rendering system. Take a large number
of environment map snapshots, and render a view by interpolating between up
to four maps (if in a tetrahedral arrangement) based on the view position.

Simple image based interpolation doesn’t convey a sense of motion, because
it basically just ghosts between separate points unless the maps are VERY
close together relative to the nearest point visible in the images.

If the images that make up the environment map cube also contain depth values
at some (generally lower) resolution, instead of rendering the environment
map as six big flat squares at infinity, you can render it as a lot of little
triangles at the proper world coordinates for the individual texture points.
A single environment map like this can be walked around in and gives a sense
of motion. If you have multiple maps from nearby locations, they can be
easily blended together. Some effort should be made to nudge the mesh
samples so that as many points are common between the maps as possible, but
even a regular grid works ok.

You get texture smearing when occluded detail should be revealed, and if you
move too far from the original camera point the textures blur out a lot, but
it is still a very good effect, is completely complexity insensitive, and is
aliasing free except when the view position causes a silhouette crease in
the depth data.
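The reprojection step described above can be sketched as follows: each texel of a cube face, scaled by its stored depth, yields a world-space vertex relative to the snapshot camera, and neighboring vertices get stitched into small triangles. A simplified sketch for a single face, under assumed conventions (not the actual research code):

```c
typedef struct { float x, y, z; } vec3;

/* Reproject one texel of the +Z cube face of an environment map:
 * (u,v) in [0,1] index the face, and the stored depth scales the
 * direction through that texel into a world-space vertex relative to
 * the snapshot camera.  Joining adjacent vertices into triangles
 * happens elsewhere; this is just the per-texel math. */
vec3 FaceTexelToWorld(float u, float v, float depth)
{
    vec3 p;
    p.x = (2.0f * u - 1.0f) * depth;   /* the face spans [-1,1] at z = 1 */
    p.y = (2.0f * v - 1.0f) * depth;
    p.z = depth;
    return p;
}
```

Rendering the six faces as such meshes, instead of flat quads at infinity, is what gives the parallax as the viewer moves.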

Even with low res environment maps like in Quake2, each snapshot would consume
700k, so taking several hundred environment images throughout a level would
generate too much data. Obviously there is a great deal of redundancy – you
will have several environment maps that contain the same wall image, for
instance. I had an interesting idea for compressing it all. If you ignore
specular lighting and atmospheric effects, any surface that is visible in
multiple environment maps can be represented by a single copy of it and
perspective transformation of that image. Single image, transformations,
sounds like… fractal compression. Normal fractal compression only deals
with affine maps, but the extension to projective maps seems logical.

I think that a certain type of game could be done with a technology like that,
but in the end, I didn’t think it was the right direction for a first person
shooter.

There is a tie in between lumigraphs, multiple environment maps, specularity,
convolution, and dynamic indirect lighting. It’s nagging at me, but it hasn’t
come completely clear.

Other topics for when I get the time to write more:

Micro environment map based model lighting. Convolutions of environment maps
by phong exponent, exponent of one with normal vector is diffuse lighting.

Full surface texture representation. Interior antialiasing with edge
matched texels.

Octree represented surface voxels. Drawing and tracing.

Bump mapping, and why most of the approaches being suggested for hardware
are bogus.

Parametric patches vs implicit functions vs subdivision surfaces.

Why all analytical boundary representations basically suck.

Finite element radiosity vs photon tracing.



Rcon Backdoor

Filed under: — johnc @ 6:30 pm

The rcon backdoor was added to help the development of QuakeWorld (It is
not present in Quake 1). At the time, attacking Quake servers with
spoofed packets was not the popular sport it seems to have become with
Quake 2, so I didn’t think much about the potential for exploitation.

The many forced releases of Quake 2 due to hacker attacks have certainly
taught me to be a lot more wary.

It was a convenient feature for us, but it turned out to be irresponsible.

There will be new releases of QuakeWorld and Quake 2 soon.


F50 vs. F40

Filed under: — johnc @ 4:00 pm

F50 pros and cons vs F40:

The front and rear views are definitely cooler on the F50, but I think I
like the F40 side view better. I haven’t taken the top off the F50 yet,
though (it’s supposed to be a 40 minute job…).

Adjustable front suspension. Press a button and it raises two inches,
which means you can actually drive it up into strip malls. The F40 had
to be driven into my garage at an angle to keep the front from rubbing.
This makes the car actually fairly practical for daily driving.

Drastically better off-idle torque. You have to rev the F40 a fair amount
to even get it moving, and if you are moving at 2000 rpm in first gear,
a Honda can pull away from you until it starts making boost at 3500 rpm.
The F50 has enough torque that you don’t even need to rev to get moving,
and it goes quite well by just flooring it after you are moving. No need
to wreck a clutch by slipping it out from 4000 rpm.

Much nicer clutch. The F40 clutch was a very low-tech single disk clutch
that required more effort than on my crazy TR with over twice the torque.

Better rearward visibility. The F40’s lexan fastback made everything to
your rear a blur.

Better shifting. A much smoother six speed than the F40’s five speed.

Better suspension. Some bumps that would upset the F40 badly are handled
without any problems.

Better aerodynamics. A flat underbody with tunnels is a good thing if you
are going to be moving at very high speeds.

I believe the F50 could probably lap a road course faster than the F40, but
in a straight line, the F40 is faster. The F50 felt a fair amount slower,
but I was chalking that up to the lack of non-linear turbo rush. Today I
drove it down to the dyno and we got real numbers.

It only made 385 hp at the rear wheels, which is maybe 450 at the crank if
you are being generous. The F40 made 415, but that was with the boost
cranked up a bit over stock.

We’re going to have to do something about that.

I’m thinking that a mild twin-turbo job will do the trick. Six pounds of
boost should get it up to a healthy 500 hp at the rear wheels, which will
keep me happy. I don’t want to turn it into a science project like my
TR, I just want to make sure it is well out of the range of any normal
car.

I may put that in line after my GTO gets finished.


Bought an F50

Filed under: — johnc @ 10:04 pm

Yes, I bought an F50. No, I don’t want a McLaren.

We will be going back to the dragstrip in a couple weeks, and I will be
exercising both the F50 and the TR there. Cash’s supercharged M3 will
probably show some of the porsches a thing or two, as well.

I’ll probably rent a road course sometime soon, but I’m not in too much of
a hurry to run the F50 into the weeds.

My TR finally got put back together after a terrific nitrous explosion
just before the last dragstrip. It now makes 1000.0 hp at the rear wheels.
Contrast that with the 415 rear wheel hp that the F40 made. Of course, a
loaded testarossa does weigh about 4000 lbs…

My project car is somewhat nearing completion. My mechanic says it will
be running in six weeks, but mechanics can be even more optimistic than
software developers. :) I’m betting on fall. It should really be something
when completed: a carbon fiber bodied Ferrari GTO with a custom, one-of-a-kind
billet aluminum 4 valve DOHC 5.2L V12 with twin turbos running
around 30 lbs of boost. It should be good for quite a bit more hp than my
TR, and the entire car will only weigh 2400 lbs.


The distance between a cool demo and production code is vast. Two months
ago, I had some functional demos of several pieces of the Quake 3 rendering
tech, but today it still isn’t usable as a full replacement for ref_gl yet.

Writing a modern game engine is a lot of work.

The new architecture is turning out very elegant. Not having to support
software rendering or color index images is helping a lot, but it is also
nice to reflect on just how much I have learned in the couple years since
the original Quake renderer was written.

My C coding style has changed for Quake 3, which is going to give me a nice
way of telling at a glance which code I have or haven’t touched since
Quake 2. In fact, there have been enough evolutions in my style that you
can usually tell what year I wrote a piece of code by just looking at
a single function:

= Function headers like this are DOOM or earlier

Function Headers like this are Quake or later

// comments not indented were written on NEXTSTEP
// (quake 1)

// indented comments were written on
// Visual C++ (glquake / quakeworld, quake2)

for (testnum=0 ; testnum<4 ; testnum++)
{ // older coding style

for (testNumber = 0 ; testNumber < 4 ; testNumber++) {
// quake 3 coding style


What’s in an F50

Filed under: — johnc @ 7:14 pm

F40 + $465,000 = F50


Quake 3 Engine

Filed under: — johnc @ 11:28 pm

Things are progressing reasonably well on the Quake 3 engine.

Not being limited to supporting a 320*240 256 color screen is
very, very nice, and will make everyone’s lives a lot easier.

All of our new source artwork is being done in 24 bit TGA files,
but the engine will continue to load .wal files and .pcx files
for developer’s convenience. Each pcx can have its own palette
now though, because it is just converted to 24 bit at load time.

Q3 is going to have a fixed virtual screen coordinate system,
independent of resolution. I tried that back in the original
glquake, but the fixed coordinate system was only 320*200, which
was excessively low. Q2 went with a dynamic layout at different
resolutions, which was a pain, and won’t scale to the high resolutions
that very fast cards will be capable of running at next year.

All screen drawing is now done assuming the screen is 640*480, and
everything is just scaled as you go higher or lower. This makes
laying out status bars and HUDs a ton easier, and will let us
do a lot cooler looking screens.
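The fixed virtual coordinate scheme amounts to a single scale at draw time. A sketch of the mapping (illustrative; the function name is invented, not the actual Q3 code):

```c
/* All 2D layout is done in a fixed 640*480 virtual screen; scale a
 * virtual point into actual framebuffer coordinates at draw time. */
void VirtualToReal(int vx, int vy, int realW, int realH, int *rx, int *ry)
{
    *rx = vx * realW / 640;
    *ry = vy * realH / 480;
}
```

Status bar and HUD layout code then never needs to know the real resolution.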

There will be an interface to let game dlls draw whatever they want
on the screen, precisely where they want it. You can suck up a lot
of network bandwidth doing that though, so some care will be needed.

Going to the completely opposite end of the hardware spectrum from
quake 3…

I have been very pleased with the fallout from the release of the
DOOM source code.

At any given spot in design space, there are different paths you
can take to move forward. I have usually chosen to try to make a
large step to a completely new area, but the temptation is there
to just clean up and improve in the same area, continuously
polishing the same program.

I am enjoying seeing several groups poring over DOOM, fixing it
up and enhancing it. Cleaning up long standing bugs. Removing
internal limitations. Orthogonalizing feature sets. Etc.

The two that I have been following closest are Team TNT’s BOOM
engine project, which is a clear headed, well engineered
improvement on the basic DOOM technical decisions, and Bruce Lewis’
glDoom project.

Any quakers feeling nostalgic should browse around:


Drag Strip Again

Filed under: — johnc @ 6:16 pm

Drag strip day! Most of the id guys, John Romero from ION,
and George and Alan from 3drealms headed to the Ennis
dragstrip today.

Nobody broke down, and some good times were posted.

11.9 @ 122 John Carmack F40
12.2 @ 122 George Broussard custom turbo 911
12.4 @ 116 Brian Hook Viper GTS
13.4 @ 106 John Romero custom turbo testarossa
13.6 @ 106 Todd Hollenshead ‘vette
13.9 @ 100 Paul Steed 911
14.0 @ 99 Tim Willits 911
14.3 @ 101 Bear Turbo Supra
14.4 @ 98 Alan Blum turbo rx-7
14.7 @ 92 Brandon James M3
15.3 @ 92 Christian Boxster
15.5 @ 93 Jen (Hook’s Chick) Turbo Volvo
16.1 @ 89 Ms. Donna Mustang GT
17.4 @ 82 Anna (Carmack’s Chick) Honda Accord
18.1 @ 75 Jennifer (Jim Molinets’ Chick) Saturn

We had three significant no-shows for various reasons: my TR,
Adrian’s viper, and Cash’s supercharged M3 were all in the shop.


Quake 3 Suggestions

Filed under: — johnc @ 12:29 pm

I haven’t even seen the “BeOS port of Quake”. Stop emailing me about
approving it. I told one of the Lion developers he could port it to
BeOS in his spare time, but I haven’t seen any results from it.

There is a public discussion / compilation going on at openquake for
suggestions to improve technical aspects of quake 3:

This is sooo much better than just dropping me an email when a thought
hits you. There are many, many thousands of you out there, and there
needs to be some filtering process so we can get the information in a
manageable form.

We will read and evaluate everything that makes it through the
discussion process. There are two possible reasons why features
don’t make it into our games – either we decide that the effort is
better spent elsewhere, or we just don’t think about it. Sometimes the
great ideas are completely obvious when suggested, but were just missed.
That is what I most hope to see.

When the suggestions involve engineering tradeoffs and we have to
consider the implementation effort of a feature vs its benefits, the
best way to convince us to pursue it is to specify EXACTLY what benefits
would be gained by undertaking the work, and specifying a clean interface
to the feature from the file system data and the gamex86 code.

We hack where necessary, but I am much more willing to spend my time on
an elegant extension that has multiple uses, rather than adding api bulk
for specific features. Point out things that are clunky and inelegant
in the current implementation. Even if it doesn’t make any user visible
difference, restructuring api for cleanliness is still a worthwhile goal.

We have our own ideas about game play features, so we may just disagree
with you. Even if you-and-all-your-friends are SURE that your
suggestions will make the game a ton better, we may not think it
fits with our overall direction. We aren’t going to be all things to
all people, and we don’t design by committee.



Filed under: — johnc @ 2:10 am

I haven’t given up on rhapsody yet. I will certainly be experimenting
with the release version when it ships, but I have had a number of
discouraging things happen. Twice I was going to go do meetings at
apple with all relevant people, but the people setting it up would
get laid off before the meetings happened. Several times I would hear
encouraging rumors about various things, but they never panned out.
We had some biz discussions with apple about rhapsody, but they were
so incredibly cautious about targeting rhapsody for consumer apps at
the expense of macos that I doubted their resolve.

I WANT to help. Maybe post-E3 we can put something together.

The SGI/microsoft deal fucked up a lot of the 3D options. The codebase
that everyone was using to develop OpenGL ICDs is now owned by
microsoft, so it is unlikely any of them will ever be allowed to port
to rhapsody (or linux, or BeOS).

That is one of the things I stress over – The Right Thing is clear,
but it’s not going to happen because of biz moves. It would be
great if ATI, which has video drivers for win, rhapsody, linux, and
BeOS, could run the same ICD on all those platforms.


The Winners is …

Filed under: — johnc @ 3:45 pm

All gone!

Paul Magyar gets the last (slightly broken) one.

Bob Farmer gets the third.

Philip Kizer gets the second one.

Kyle Bousquet gets the first one. Remember: you have to be able to
physically come to our offices, and be capable of doing an OS install.
Shipping it is not an option.


Filed under: — johnc @ 2:16 pm

I just shut down the last of the NEXTSTEP systems running at id.

We hadn’t really used them for much of anything in the past year, so it was
just easier to turn them off than to continue to administer them.

Most of the intel systems had already been converted to NT or 95, and
Onethumb gets all of our old black NeXT hardware, but we have four nice
HP 712/80 workstations that can’t be used for much of anything.

If someone can put these systems to good use (a dallas area unix hacker),
you can have them for free. As soon as they are spoken for, I will update
my .plan, so check immediately before sending me email.

You have to come by our office (in Mesquite) and do a fresh OS install here
before you can take one. There may still be junk on the HD, and I can’t
spend the time to clean them myself. You can run either NEXTSTEP 3.3 or
HP/UX. These are NOT intel machines, so you can’t run dos or windows.
I have NS CD’s here, but I can’t find the original HP/UX CDs. Bring your
own if that’s what you want.

I’m a bit nostalgic about the NeXT systems – the story in the Id Anthology
is absolutely true: I walked through a mile of snow to the bank to pay for
our first workstation. For several years, I considered it the best
development environment around. It still has advantages today, but you
can’t do any accelerated 3D work on it.

I had high hopes for rhapsody, but even on a top of the line PPC, it felt
painfully sluggish compared to the NT workstations I use normally, and
apple doesn’t have their 3D act together at all.

It’s kind of funny, but even through all the D3D/OpenGL animosity, I think
Windows NT is the best place to do 3D graphics development.


Robert Duffy in Charge of the Editor Codebase

Filed under: — johnc @ 4:04 pm

Robert Duffy, the maintainer of Radiant QE4 is now “officially” in charge of
further development of the editor codebase. He joins Zoid as a (part time)
contractor for us.

A modified version of Radiant will be the level editor for Quake 3. The
primary changes will be support for curved surfaces and more general surface
shaders. All changes will be publicly released, either after Q3 ships or
possibly at the release of Q3Test, depending on how things are going.

The other major effort is to get Radiant working properly on all of the 3D
cards that are fielding full OpenGL ICDs. If you want to do level
development, you should probably get an 8mb video card. Permedia II cards
have been the mainstay for developers that can’t afford Intergraph systems,
but 8mb rendition v2200 (thriller 3D) cards are probably a better bet as
soon as their ICD gets all the bugs worked out.


New Plan

Filed under: — johnc @ 8:07 pm

The Old Plan:

The rest of the team works on an aggressive Quake 2 expansion pack while
Brian and I develop tools and code for the entirely new Trinity generation
project to begin after the mission pack ships.

The New Plan:

Expand the mission pack into a complete game, and merge together a completely
new graphics engine with the quake 2 game / client / server framework, giving
us Quake 3.

“Trinity” is basically being broken up into two phases: graphics and
everything else. Towards the end of Quake 1’s development I was thinking
that we might have been better off splitting quake on those categories, but
in reverse order. Doing client/server, the better modification framework,
and qc, coupled with a spiced up DOOM engine (Duke style) for one game, then
doing the full 3D renderer for the following game.

We have no reason to believe that the next generation development would
somehow go faster than the previous, so there is a real chance that doing all
of the Trinity technology at once might push game development time to a full
two years for us, which might be a bit more than the pressure-cooker work
atmosphere here could handle.

So, we are going to try an experiment.

The non-graphics things that I was planning for Trinity will be held off
until the following project – much java integration with client downloadable
code being one of the more significant aspects. I hope to get to some next
generation sound work, but the graphics engine is the only thing I am
committing to.

The graphics engine is going to be hardware accelerated ONLY. NO SOFTWARE
RENDERER, and it won’t work very well on a lot of current hardware. We
understand fully that this is going to significantly cut into our potential
customer base, but everyone was tired of working under the constraints of the
software renderer. There are still going to be plenty of good quake derived
games to play from other developers for people without appropriate hardware.

There are some specific things that the graphics technology is leveraging that
may influence your choice of a 3D accelerator.

All source artwork is being created and delivered in 24 bit color. An
accelerator that can perform all 3D work in 24 bit color will look
substantially better than a 16 bit card. You will pay a speed cost for it,
though.

Most of the textures are going to be higher resolution. Larger amounts of
texture memory will make a bigger difference than it does on Quake 2.

Some key rendering effects require blending modes that some cards don’t
support.

The fill rate requirements will be about 50% more than Quake 2, on average.
Cards that are fill rate limited will slow down unless you go to a lower
resolution.

The triangle rate requirements will be at least double Quake 2, and scalable
to much higher levels of detail on appropriate hardware.

Here are my current GUESSES about how existing cards will perform.

Voodoo 1
Performance will be a little slow, but it should look good and run acceptably.
You will have to use somewhat condensed textures to avoid texture thrashing.

Voodoo 2
Should run great. Getting the 12 mb board is probably a good idea if you want
to use the high resolution textures. The main rendering mode won’t be able to
take advantage of the dual TMU the same way quake 2 does, so the extra TMU
will be used for slightly higher quality rendering modes instead of greater
speed: trilinear / detail texturing, or some full color effects where others
get a mono channel.

Permedia 2
Will be completely fill rate bound, so it will basically run 2/3 the speed
that quake 2 does. Not very fast. It also doesn’t have one of the needed
blending modes, so it won’t look very good, either. P2 does support 24 bit
rendering, but it won’t be fast enough to use it.

ATI Rage Pro
It looks like the rage pro has all the required blending modes, but the jury
is still out on the performance.

Intel I740
Should run well with all features, and because all of the textures come out
of AGP memory, there will be no texture thrashing at all, even with the full
resolution textures.

Rendition 2100/2200
The 2100 should run about the speed of a voodoo 1, and the 2200 should be
faster. They support all the necessary features, and an 8 mb 2200 should be
able to use the high res textures without a problem. The renditions are the
only current boards that can do 24 bit rendering with all the features. It
will be a bit slow in 24 bit mode, but it will look the best.

Power VR PCX2
Probably won’t run Quake 3. They don’t have ANY of the necessary blending
modes, so it can’t look correct. Video Logic might decide to rev their
minidriver to try to support it, but it is probably futile.

RIVA 128
Riva puts us in a bad position. They are very fast, but they don’t support
an important feature. We can crutch it up by performing some extra drawing
passes, but there is a bit of a quality loss, and it will impact their speed.
They will probably be a bit faster than voodoo 1, but not to the degree that
they are in Quake 2.

Naturally, the best cards are yet to come (I won’t comment on unreleased
cards). The graphics engine is being designed to be scalable over the next
few YEARS, so it might look like we are shooting a bit high for the first
release, but by the time it actually ships, there will be a lot of people
with brand new accelerators that won’t be properly exploited by any other
game.


American McGee

Filed under: — johnc @ 11:46 pm

American McGee has been let go from Id.

His past contributions include work in three of the all time great
games (DOOM 2, Quake, Quake 2), but we were not seeing what we wanted.


Bug Reports on the 3.12 Release

Filed under: — johnc @ 11:46 am

Don’t send any bug reports on the 3.12 release to me, I just forward them
over to jcash. He is going to be managing all future work on the Quake 2
codebase through the mission packs. I’m working on trinity.

3.12 answered the release question pretty decisively for me. We were in
code freeze for over two weeks while the release was being professionally
beta tested, and all it seemed to get us was a two week later release.

Future releases are going to be of the fast/multiple release type, but
clearly labeled as a “beta” release until it stabilizes. A dozen
professional testers or fifty amateur testers just can’t compare to the
thousands of players who will download a beta on the first day.

I have spent a while thinking about the causes of the patches for Q2.
Our original plan was to just have the contents of 3.12 as the first
patch, but have it out a month earlier than we did.

The first several patches were forced due to security weaknesses. Lesson
learned – we need to design more security consciously to try to protect
against the assholes out there.

The cause for the upcoming 3.13 patch is the same thing that has caused us
a fair amount of trouble through Q2’s development – instability in the
gamex86 code due to its descending from QC code in Q1. It turns out that
there were lots of bugs in the original QC code, but because of its safe
interpreted nature (specifically having a null entity reference the world)
they never really bothered anyone. We basically just ported the QC code
to regular C for Q2 (it shows in the code) and fixed crash bugs as they
popped up. We should have taken the time to redesign more for C’s
strengths and weaknesses.
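The QC safety net mentioned here (a null entity referencing the world) can be mimicked in plain C by reserving entity slot 0 for the world and redirecting null references to it. A sketch of the idea (all names invented; this is not the actual Q1/Q2 code):

```c
#include <stddef.h>

typedef struct { int health; } entity_t;

#define MAX_ENTITIES 1024

/* QuakeC-style safety net: entity slot 0 is the world, and any null
 * entity reference is silently redirected to it instead of
 * dereferencing a null pointer and crashing. */
static entity_t g_entities[MAX_ENTITIES];

entity_t *EntityOrWorld(entity_t *e)
{
    return e ? e : &g_entities[0];   /* slot 0 == the world */
}
```

Routing every entity access through a check like this trades a tiny cost per access for QC's crash-proof behavior; a straight port to C loses that net, which is exactly the class of bug described.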


Wired Article

Filed under: — johnc @ 3:47 am

I just read the Wired article about all the Doom spawn.

I was quoted as saying “like I’m supposed to be scared of Monolith”, which
is much more derogatory sounding than I would like.

I haven’t followed Monolith’s development, and I don’t know any of their
technical credentials, so I am not in any position to evaluate them.

The topic of “is microsoft going to crush you now that they are in the
game biz” made me a bit sarcastic.

I honestly wish the best to everyone pursuing new engine development.


Voodoo 2

Filed under: — johnc @ 2:29 pm

8 mb or 12 mb voodoo 2?

An 8mb v2 has 2 mb of texture memory on each TMU. That is not as general
as the current 6mb v1 cards that have 4 mb of texture memory on a single
TMU. To use the multitexture capability, textures are restricted to
being on one or the other TMU (simplifying a bit here). There is some
benefit over only having 2 mb of memory, but it isn’t double. You will
see more texture swapping in quake on an 8mb voodoo 2 than you would
on a 6mb voodoo 1. However, the texture swapping is several times faster,
so it isn’t necessarily all that bad.

If you use the 8 bit palettized textures, there will probably not be any
noticeable speed improvement with a 12 mb voodoo 2 vs an 8 mb one. The
situation that would most stress it would be an active deathmatch that
had players using every skin. You might see a difference there.

A game that uses multitexture and 16 bit textures for everything
will stress a 4/2/2 voodoo layout. Several of the Quake engine licensees
are using full 16 bit textures, and should perform better on a 4/4/4 card.

The differences probably won’t show as significant on timedemo numbers,
but they will be felt as little one frame hitches here and there.
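The tradeoff above falls out of simple arithmetic: with multitexture, a surface's two textures live on separate TMUs, so each TMU's working set must fit its own memory independently, rather than sharing a single pool. A sketch of that check (illustrative only; names are invented):

```c
/* A multitextured surface must keep its two textures on separate TMUs,
 * so what matters is whether each TMU's working set fits that TMU's own
 * memory, not whether the total fits the combined memory. */
int FitsPerTmu(int tmu0KB, int tmu1KB, int perTmuKB)
{
    return tmu0KB <= perTmuKB && tmu1KB <= perTmuKB;
}
```

For example, 3 MB of base textures plus 3 MB of lightmaps fits a 12 MB card (4 MB per TMU) but thrashes an 8 MB card (2 MB per TMU), even though 6 MB is less than 8.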


The State of 3D Cards

Filed under: — johnc @ 2:59 pm

I have been getting a lot of mail with questions about the intel i740
today, so here is a general update on the state of 3D cards as they relate
to quake engine games.

ATI rage pro
On paper, this chip looks like it should run almost decently – about the
performance of a permedia II, but with per-pixel mip mapping and colored
lighting. With the currently shipping MCD GL driver on NT, it just doesn’t
run well at all. The performance is well below acceptable, and there are
some strange mip map selection errors. We have been hearing for quite some
time that ATI is working on an OpenGL ICD for both ‘95 and NT, but we
haven’t seen it yet. The rage pro supposedly has multitexture capability,
which would help out quite a bit if they implement the multitexture
extension. If they do a very good driver, the rage pro may get up to the
performance of the rendition cards. Supports up to 16MB, which would make
it good for development work if the rest of it was up to par.

3DLabs permedia II
Good throughput, poor fillrate, fair quality, fair features.

No colored lighting blend mode, currently no mip mapping at all.

Supports up to 8MB.

The only currently shipping production full ICD for ‘95, but a little
slow.

If 3dlabs implemented per-polygon mip mapping, they would get both a
quality and a slight fillrate boost.

Drivers available for WinNT on the DEC Alpha (but the alpha drivers are
very flaky).

Power VR PCX2
Poor throughput, good fillrate, fair quality, poor features, low price.

No WinNT support.

Almost no blend modes at all, low alpha precision.

Even though the hardware doesn’t support multitexture, they could implement
the multi-texture extension just to save on polygon setup costs. That
might get them a 10% to 15% performance boost.

They could implement the point parameters extension for a significant boost
in the speed of particle rendering. That wouldn’t affect benchmark scores
very much, but it would help out in hectic deathmatches.

Their opengl minidriver is already a fairly heroic effort – the current
PVR takes a lot of beating about the head to make it act like an OpenGL

Rendition v2100 / v2200
Good throughput, good fillrate, very good quality, good features.

A good all around chip. Not quite voodoo1 performance, but close.

v2100 is simply better than everything else in the $99 price range.

Can render 24 bit color for the best possible quality, but their current
drivers don’t support it. Future ones probably will.

Can do 3D on the desktop.

Rendition should be shipping a full ICD OpenGL, which will make an 8mb
v2200 a very good board for people doing 3D development work.

NVidia Riva 128
Very good throughput, very good fillrate, fair quality, fair features.

The fastest fill rate currently shipping, but it varies quite a bit based
on texture size. On large textures it is slightly slower than voodoo, but
on smaller textures it is over twice as fast.

On paper, their triangle throughput rate should be three times what voodoo
gives, but in practice we are only seeing a slight advantage on very fast
machines, and worse performance on pentium class machines. They probably
have a lot of room to improve that in their drivers.

In general, it is fair to say that riva is somewhat faster than voodoo 1,
but it has a few strikes against it.

The feature implementation is not complete. They have the blend mode for
colored lighting, but they still don’t have them all. That may hurt them
in future games. Textures can only be 1 to 1 aspect ratio. In practice,
that just means that non-square textures waste memory.

The rendering quality isn’t quite as high as voodoo or rendition. It looks
like some of their iterators don’t have enough precision.

Nvidia is serious and committed to OpenGL. I am confident that their
driver will continue to improve in both performance and robustness.

While they can do good 3D in a window, they are limited to a max of 4MB of
framebuffer, which means that they can’t run at a high enough resolution
to do serious work.

3DFX Voodoo 1
The benchmark against which everything else is measured.

Good throughput, good fillrate, good quality, good features.

It has a couple faults, but damn few: max texture size limited to 256*256
and 8 to 1 aspect ratio. Slow texture swapping. No 24 bit rendering.

Because of the slow texture swapping, anyone buying a voodoo should get a
six mb board (e.g. Canopus Pure3D). The extra ram prevents some sizable
jerks when textures need to be swapped.

Highly tuned minidriver. They have a full ICD in alpha, but they are being
slow about moving it into production. Because of the add-in board nature
of the 3dfx, the ICD won’t be useful for things like running level editors,
but it would at least guarantee that any new features added to quake engine
games won’t require revving the minidriver to add new functionality.

3DFX Voodoo 2
Not shipping yet, but we were given permission to talk about the benchmarks
on their preproduction boards.

Excellent throughput, excellent fillrate, good quality, excellent features.

The numbers were far and away the best ever recorded, and they are going to
get significantly better. On quake 2, voodoo 2 is setup limited, not fill
rate limited. Voodoo 2 can do triangle strip and fan setup in hardware,
but their opengl can’t take advantage of it until the next revision of
glide. When that happens, the number of vertexes being sent to the card
will drop by HALF. At 640*480, they will probably become fill rate bound
again (unless you interleave two boards), but at 512*384, they will
probably exceed 100 fps on a timedemo. In practice, that means that you
will play the game at 60 fps with hardly ever a dropped frame.

The texture swapping rate is greatly improved, addressing the only
significant problem with voodoo.

I expect that for games that heavily use multitexture (all quake engine
games), voodoo 2 will remain the highest performer for all of ‘98. All
you other chip companies, feel free to prove me wrong. :)

Lack of 24 bit rendering is the only visual negative.

As with any voodoo solution, you also give up the ability to run 3D
applications on your desktop. For pure gamers, that isn’t an issue, but
for hobbyists that may be interested in using 3D tools it may have some

Intel i740
Good throughput, good fillrate, good quality, good features.

A very competent chip. I wish intel great success with the 740. I think
that it firmly establishes the baseline that other companies (especially
the ones that didn’t even make this list) will be forced to come up to.

Voodoo rendering quality, better than voodoo1 performance, good 3D on a
desktop integration, and all textures come from AGP memory so there is no
texture swapping at all.

Lack of 24 bit rendering is the only negative of any kind I can think of.

Their current MCD OpenGL on NT runs quake 2 pretty well. I have seen their
ICD driver on ‘95 running quake 2, and it seems to be progressing well.
The chip has the potential to outperform voodoo 1 across the board, but
3DFX has more highly tuned drivers right now, giving it a performance edge.
I expect intel will get the performance up before releasing the ICD.

It is worth mentioning that of all the drivers we have tested, intel’s MCD
was the only driver that did absolutely everything flawlessly. I hope that
their ICD has a similar level of quality (it’s a MUCH bigger job).

An 8mb i740 will be a very good setup for 3D development work.



Filed under: — johnc @ 1:34 am

Just got back from the Q2 wrap party in vegas that Activision threw for us.

Having a reasonable grounding in statistics and probability and no belief
in luck, fate, karma, or god(s), the only casino game that interests me
is blackjack.

Playing blackjack properly is a test of personal discipline. It takes a
small amount of skill to know the right plays and count the cards, but the
hard part is making yourself consistently behave like a robot, rather than
succumbing to your “gut instincts.”
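The bookkeeping behind a basic high/low count is simple enough to sketch; the code below is a toy illustration of the counting scheme itself (ranks 2..11, with 11 standing in for an ace), not anything from an actual play strategy:

```c
/* High/low count sketch: 2-6 count +1, tens and aces count -1,
   7-9 are neutral.  Ranks are encoded 2..11, 11 meaning ace. */
int hilo_value(int rank)
{
    if (rank >= 2 && rank <= 6)
        return +1;
    if (rank >= 10)     /* 10, J, Q, K encoded as 10; ace as 11 */
        return -1;
    return 0;           /* 7, 8, 9 */
}

/* Running count over a sequence of dealt ranks. */
int hilo_count(const int *ranks, int n)
{
    int count = 0;
    for (int i = 0; i < n; i++)
        count += hilo_value(ranks[i]);
    return count;
}
```

A positive running count means proportionally more high cards remain, which is when the bet scaling described above kicks in.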

I play a basic high/low count, but I scale my bets widely – up to 20 to 1
in some cases. It’s not like I’m trying to make a living at it, so the
chance of getting kicked out doesn’t bother me too much.

I won $20,000 at the tables, which I am donating to the Free Software
Foundation. I have been meaning to do something for the FSF for a long
time. Quake was deployed on a dos port of FSF software, and both DOOM and
Quake were developed on NEXTSTEP, which uses many FSF based tools. I
don’t subscribe to all the FSF dogma, but I have clearly benefited from
their efforts.


Back Again

Filed under: — johnc @ 3:06 am

Ok, I’m overdue for an update.

The research getaway went well. In the space of a week, I only left my
hotel to buy diet coke. It seems to have spoiled me a bit; the little
distractions in the office grate on me a bit more now. I will likely
make week long research excursions a fairly regular thing during non-
crunch time. Once a quarter sounds about right.

I’m not ready to talk specifically about what I am working on for
trinity. Quake went through many false starts (beam trees, portals,
etc) before settling down on its final architecture, so I know that the
odds are good that what I am doing now won’t actually be used in the
final product, and I don’t want to mention anything that could be taken
as an implied “promise” by some people.

I’m very excited by all the prospects, though.

Many game developers are in it only for the final product, and the
process is just what they have to go through to get there. I respect
that, but my motivation is a bit different.

For me, while I do take a lot of pride in shipping a great product, the
achievements along the way are more memorable. I don’t remember any of
our older product releases, but I remember the important insights all
the way back to using CRTC wraparound for infinite smooth scrolling in
Keen (actually, all the way back to understanding the virtues of
structures over parallel arrays in apple II assembly language…).
Knowledge builds on knowledge.
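The structures-over-parallel-arrays insight is easy to show in C; this is a toy sketch with an invented entity layout, purely for illustration:

```c
/* Parallel arrays: each field of an entity lives in its own array.
   Reaching one entity touches scattered memory, and adding a field
   means revisiting every piece of code that indexes the arrays. */
float ent_x[128];
float ent_y[128];
int   ent_health[128];

/* One struct per entity: a single index (or pointer) reaches every
   field, the layout is explicit, and "an entity" can be passed
   around as one object. */
typedef struct {
    float x, y;
    int   health;
} entity_t;

entity_t entities[128];

/* Moving an entity becomes one coherent operation on one object. */
void entity_move(entity_t *e, float dx, float dy)
{
    e->x += dx;
    e->y += dy;
}
```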

I wind up catagorizing periods of my life by how rich my learning
experiences were at the time.

My basic skills built up during school on apple II computers, but lack
of resources limited how far and fast I could go. The situation is so
much better for programmers today – a cheap used PC, a linux CD, and
an internet account, and you have all the tools and resources necessary
to work your way to any level of programming skill you want to shoot
for.

My first six months at Softdisk, working on the PC, was an incredible
learning experience. For the first time, I was around a couple of
programmers with more experience than I had (Romero and Lane Roath),
there were a lot of books and materials available, and I could devote
my full and undivided attention to programming. I had a great time.

The two years following, culminating in DOOM and the various video game
console work I did, was a steady increase in skills and knowledge along
several fronts – more graphics, networking, unix, compiler writing,
cross development, risc architectures, etc.

The first year of Quake’s development was awesome. I got to try so many
new things, and I had Michael Abrash as my sounding board. It would
probably surprise many classically trained graphics programmers how
little I knew about conventional 3D when I wrote DOOM – hell, I had
problems properly clipping wall polygons (which is where all the polar
coordinate nonsense came from). Quake forced me to learn things right,
as well as find some new innovations.

The last six months of Quake’s development was mostly pain and suffering
trying to get the damn thing finished. It was all worth it in the end,
but I don’t look back at it all that fondly.

The development cycle of Quake 2 had some moderate learning experiences
for me (glquake, quakeworld, radiosity, openGL tool programming, win32,
etc), but it also gave my mind time to sift through a lot of things
before getting ready to really push ahead.

I think that the upcoming development cycle for trinity is going to be
at least as rewarding as Quake’s was. I am reaching deep levels of
understanding on some topics, and I am branching out into several
completely new (non-graphics) areas that should cross-pollinate
well with everything else I am doing.

There should also be a killer game at the end of it. :)



Filed under: — johnc @ 12:27 am


Odds are that I will get back and just flush the 500
messages in my mailbox.

No, I’m not taking a vacation. Quite the opposite, in fact.

I’m getting a hotel room in a state where I don’t know anyone,
so I can do a bunch of research with no distractions.

I bought a new computer specifically for this purpose – A
Dolch portable pentium-II system. The significant thing is
that it has full length PCI slots, so I was able to put an
Evans & Sutherland OpenGL accelerator in it (not enough room
for an intergraph Realizm, though), and still drive
the internal LCD screen. It works out pretty well, but I’m
sure there will be conventional laptops with good 3D
acceleration available later this year.

This will be an interesting experiment for me. I have always
wondered how much of my time that isn’t at peak productivity
is a necessary rest break, and how much of it is just wasted.


The client’s IP address is now added to the userinfo before
calling ClientConnect(), so any IP filtering / banning rules
can now be implemented in the game dll. This will also give
some of you crazy types the ability to sync up with multiple
programs on the client computers outside of Q2 itself.
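With the address in the userinfo, a ban check in the game dll just means parsing the Quake-style “\key\value” string and comparing. This sketch is self-contained; the function names and the ban-list shape are mine, not the actual game dll API:

```c
#include <string.h>

/* Illustrative ban list; addresses here are documentation examples. */
static const char *banned_ips[] = { "192.0.2.7", "198.51.100.4" };

/* Pull a value out of a "\key\value\key\value..." userinfo string.
   Returns 1 and fills 'out' on success, 0 if the key is absent. */
int userinfo_value(const char *info, const char *key,
                   char *out, size_t outsize)
{
    while (*info == '\\') {
        const char *k = ++info;
        const char *kend = strchr(k, '\\');
        if (!kend)
            return 0;
        const char *v = kend + 1;
        const char *vend = strchr(v, '\\');
        size_t vlen = vend ? (size_t)(vend - v) : strlen(v);
        if ((size_t)(kend - k) == strlen(key) &&
            !strncmp(k, key, (size_t)(kend - k))) {
            if (vlen >= outsize)
                vlen = outsize - 1;
            memcpy(out, v, vlen);
            out[vlen] = '\0';
            return 1;
        }
        info = vend ? vend : v + vlen;
    }
    return 0;
}

/* Returns 1 if the connecting client should be rejected. */
int ip_is_banned(const char *userinfo)
{
    char addr[64];
    if (!userinfo_value(userinfo, "ip", addr, sizeof(addr)))
        return 0;
    char *colon = strchr(addr, ':');    /* strip the :port suffix */
    if (colon)
        *colon = '\0';
    for (size_t i = 0; i < sizeof(banned_ips) / sizeof(banned_ips[0]); i++)
        if (!strcmp(addr, banned_ips[i]))
            return 1;
    return 0;
}
```

A game dll would run a check like this at connect time and refuse the slot on a match.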

A new API entry point has been added to the game dll that
gets called whenever an “sv” command is issued on the
server console. This is to allow you to create commands
for the server operator to type, as opposed to commands
that a client would type (which are defined in g_cmds.c).
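The shape of such a hook is a simple dispatcher keyed on the first argument. This is an illustrative sketch, not the real entry point (the real one receives its arguments through engine imports):

```c
#include <stdio.h>
#include <string.h>

/* Example operator-only state toggled from the server console. */
static int test_mode = 0;

/* Hypothetical handler called when the operator types
   "sv <cmd> ..." on the server console; argv[0] is <cmd>. */
void ServerCommand_Sketch(int argc, char **argv)
{
    if (argc < 1)
        return;

    if (!strcmp(argv[0], "testmode")) {
        test_mode = !test_mode;     /* toggle a server-side mode */
    } else {
        printf("Unknown server command: %s\n", argv[0]);
    }
}
```

Client-typed commands stay in g_cmds.c; this path is only reachable from the server console, so it is safe for operator-only actions.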


We did a bunch of profiling today, and finally got the
information I wanted. We weren’t doing anything brain dead
stupid in the server, and all of the time was pretty much
where I expected it to be.

I did find two things we can pursue for optimization.

A moderately expensive categorization function is called at
both the beginning and end of client movement simulation.
With some care, we should be able to avoid the first one
most of the time. That alone should be good for a >10%
server speedup.

The other major thing is that the client movement
simulation accounted for 60% of the total execution time,
and because it was already compartmentalized for client
side prediction, it would not be much work to make it
thread safe. Unfortunately, it would require MAJOR rework
of the server code (and some of the game dll) to allow
multiple client commands to run in parallel.

The potential is there to double the peak load that a
server can carry if you have multiple processors. Note
that you will definitely get more players / system by
just running multiple independent servers, rather than
trying to get them all into a single large server.

We are not going to pursue either of these optimizations
right now, but they will both be looked at again later.

All this optimizing of the single server is pushing the
tail end of a paradigm. I expect trinity to be able to
seamlessly hand off between clustered servers without the
client even knowing it happened.


Base100 Server

Filed under: — johnc @ 11:09 pm

We got 70 people on a base100 server, and it died after it
wedged at 100% utilization for a while. Tomorrow we will
find exactly what overflowed, and do some profiling.

Base100 is really only good for 50 or so players without
overcrowding, but we have another map being built that
should hold 100 people reasonably well.

I will look into which will be the easier path to more
server performance: scalar optimization of whatever is
critical now, or splitting it off into some more threads
to run on multiple processors. Neither one is trivial.

My goal is to be able to host stable 100 player games in
a single map.

I just added a “players” command that will dump the total
number of players in the game, and as many frags/names as
it can fit in a packet (around 50, I think).

Coop Play

Filed under: — johnc @ 6:33 pm

Coop play works now, including coop savegames. I also
fixed the savegame problems when under doors or on lifts.

We still have some game issues we need to hack around to
allow coop to be played all the way through the game
(like needing to pick up multiple power cubes, but still
leave them for other coop players to grab), and the monster
ai needs a bit of work for multiple players, but it will
all be there for the next release.


Quake 2 3.10

Filed under: — johnc @ 9:53 pm

Version 3.10 patch is now out.

A few more minor fixes since yesterday:

* qhost support
* made qport more random
* fixed map reconnecting
* removed s_sounddir
* print out primary / secondary sound buffer status on init
* abort game after a single net error if not dedicated
* fixed sound loss when changing sound compatibility
* removed redundant reliable overflow print on servers
* gl_lockpvs for map development checking
* made s_primary 0 the default

Christian will be updating the bug page tomorrow. So hold
off on all reporting for 24 hours, then check the page to
make sure the bug is not already known.

All bug reports should go to Christian:

I have had several cases of people with lockup problems
and decompression overreads having their problems fixed
after they mentioned that they were overclocking either
their CPU, their system bus (to 75mhz), or their 3DFX.

It doesn’t matter if “it works for everything else,” it
still may be the source of the problem.

I know that some people are still having problems with
vanilla systems, though. I have tried everything I can
think of remotely, but if someone from the Dallas area wants
to bring a system by our office, I can try some more serious

Something that has shown to help with some 3dfx problems is
to set “cl_maxfps 31”, which will keep the console between
level changes from rendering too fast, which has caused some
cards to hang the system.



Filed under: — johnc @ 8:49 pm

New stuff fixed:

* timeout based non-active packet streams
* FS_Read with CD off checks
* dedicated server not allocate client ports
* qport proxy checking stuff
* fixed mouse wheel control
* forced newlines on several Cbuf_AddText ()
* if no nextmap on a level, just stay on same one
* chat maximums to prevent user forced overflows
* limit stringcmds per frame to prevent malicious use
* helped jumping down slopes
* checksum client move messages to prevent proxy bots
* challenge / response connection process
* fixed rcon
* made muzzle flash lights single frame, rather than 0.1 sec

I still don’t have an answer to the WAADRNOTAVAILABLE problem.
I have made the packet stream as friendly as possible, but some
computers are still choking.

I managed to get fixes for address translating routers done
without costing any bandwidth from the server, just a couple
bytes from the client, which isn’t usually a critical path.

I have spent a fair amount of time trying to protect against
“bad” users in this release. I’m sure there will be more things
that come up, but I know I got a few of the ones that are
currently being exploited.

We will address any attack that can make a server crash. Other
attacks will have to have the damage and prevalence weighed
against the cost of defending against it.

Client message overflows. The maximum number of commands that
can be issued in a user packet has been limited. This prevents
a client from doing enough “says” or “kills” to overflow the
message buffers of other clients.
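The per-frame limit amounts to a counter reset each server frame and checked before every string command runs. A minimal sketch (the cap value and names here are illustrative, not the shipped constants):

```c
/* Illustrative cap on string commands per client per server frame. */
#define MAX_STRINGCMDS_PER_FRAME 8

typedef struct {
    int stringcmds_this_frame;
} client_sketch_t;

/* Called once when a new server frame begins for this client. */
void client_begin_frame(client_sketch_t *cl)
{
    cl->stringcmds_this_frame = 0;
}

/* Returns 1 if the command may run, 0 if it should be dropped.
   A client spamming "say" or "kill" hits the cap and the excess
   never reaches other clients' message buffers. */
int client_allow_stringcmd(client_sketch_t *cl)
{
    if (cl->stringcmds_this_frame >= MAX_STRINGCMDS_PER_FRAME)
        return 0;
    cl->stringcmds_this_frame++;
    return 1;
}
```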

Challenge on connection. A connection request to a server is
now a two stage process of requesting a challenge, then using
it to connect. This prevents denial of service attacks where
connection packets with forged IPs are flooded at a server,
preventing any other users from connecting until they timeout.
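The two-stage handshake can be sketched as a small table of outstanding challenges keyed by requester IP. This is an illustration of the idea, not the shipped protocol code:

```c
#include <stdlib.h>

#define MAX_CHALLENGES 16

typedef struct {
    unsigned ip;        /* requester address (opaque here) */
    int      challenge; /* random token we handed out */
    int      in_use;
} challenge_t;

static challenge_t challenges[MAX_CHALLENGES];

/* "getchallenge" handler: remember a fresh token for this IP. */
int issue_challenge(unsigned ip)
{
    for (int i = 0; i < MAX_CHALLENGES; i++) {
        if (!challenges[i].in_use) {
            challenges[i].ip = ip;
            challenges[i].challenge = rand();
            challenges[i].in_use = 1;
            return challenges[i].challenge;
        }
    }
    /* table full: recycle the oldest slot */
    challenges[0].ip = ip;
    challenges[0].challenge = rand();
    challenges[0].in_use = 1;
    return challenges[0].challenge;
}

/* "connect" handler: a forged-IP flood never passes this check,
   because the flooder never received the token we sent back to
   the real address. */
int check_challenge(unsigned ip, int challenge)
{
    for (int i = 0; i < MAX_CHALLENGES; i++) {
        if (challenges[i].in_use &&
            challenges[i].ip == ip &&
            challenges[i].challenge == challenge) {
            challenges[i].in_use = 0;   /* single use */
            return 1;
        }
    }
    return 0;
}
```

The key property is that the challenge travels back to the claimed source address, so only a client that can actually receive at that IP can complete the connect.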

Client packet checksumming. The packets are encoded in a way
that will prevent proxies that muck with the packet contents,
like the stoogebot, from working.
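One way to get that property is to mix each payload byte with the packet’s sequence number through a table the proxy doesn’t know. The table and function below are made up for illustration; the shipped encoding is different:

```c
/* Toy sequence-keyed checksum: without knowing the table, a proxy
   that rewrites the payload cannot produce a matching checksum.
   (In a real game the table would be kept out of casual reach.) */
static const unsigned char chk_table[16] = {
    0x3a, 0x91, 0x5c, 0xe7, 0x08, 0xb4, 0x6d, 0xf2,
    0x19, 0xc5, 0x70, 0x2e, 0x8b, 0x47, 0xd3, 0xa6,
};

unsigned char move_checksum(const unsigned char *data, int len,
                            int sequence)
{
    unsigned crc = 0;
    for (int i = 0; i < len; i++)
        crc = (crc + data[i]) ^ chk_table[(sequence + i) & 15];
    return (unsigned char)(crc & 0xff);
}
```

Keying on the sequence number also stops a proxy from replaying a captured move message under a different packet.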

Tera MTA Prototype

Filed under: — johnc @ 4:57 am

Are there any quake fans working with the tera MTA prototype
at UCSD? I am real curious to see how some of my threaded codes
(qvis3, qrad3) would run on the MTA.


How Refreshing

Filed under: — johnc @ 5:18 pm

Wired magazine does something that almost no other print magazine
we have dealt with does.

They check the statements they are going to print.

I just got a “fact check” questionnaire email from Wired about
an upcoming article, and I recall that they did this last
time they did an article about us.

Most of the time when we talk with the press, we try to get
them to send us a proof of the article for fact checking. They
usually roll their eyes, and grudgingly agree, then don’t send
us anything, or send it to us after it has gone to press.

Wired had a few errors in their statements, but it won’t get
printed that way because they checked with us.

How refreshing.


A small public announcement:

The Linux Expo is looking for:

1. People that develop games or game servers in *nix, and
2. People interested in learning how to develop games in *nix.

Either one should give a write to


Happy New Year?

Filed under: — johnc @ 3:29 pm

Some of the things I have changed recently:

* fixed the cinematics
* don’t clear config after dedicated server
* don’t reallocate sockets unless needed
* don’t process channel packets while connecting
* rate variable for modem bandwidth choking
* delta compress client usercmds
* fixed sound quality changing after intermissions
* fixed PVS problem when head was directly under solid in GL
* added r_drawflat and cl_testlights to cheats

There are a few problems that I am still trying to track down:

Map versions differ error
Sometimes connecting and seeing messages but not getting in
Decompression read overrun.

Of course, we don’t actually get any of those errors on any
of our systems here, so I am having to work remotely with
other users to try and fix them, which is a bit tougher.

My new years resolution is to improve my coding style by
bracing all single line statements and consistently using
following caps on multi word variable names.

Actually, I am currently trying out the full Sun coding style,
but I’m not so sure about some of the comment conventions: don’t
use multiple lines of // comments, and don’t use rows of
separating characters in comments. I’m not convinced those
are good guidelines.



Filed under: — johnc @ 4:54 am

A user just reported having their net quake problems go
away when they killed ICQ. I suppose it has never been stated
directly, so here goes:

Quake needs all the bandwidth that a modem connection provides
to play well. Any other program accessing the internet is
going to cause a degradation in gameplay, sometimes severe.

So quit IRC, ICQ, email, and web browsers before setting out
for serious net play unless you have ISDN or better.

Address Translation and UDP

Filed under: — johnc @ 1:33 am

I just spent a few hours working with a quake player that
still couldn’t net quake with 3.09.

It took a while, but I finally understand what is going on.

He could play net games on his local lan, but when he tried to
connect to remote servers, it would always fail and timeout
midway through the connection process, or at most a few seconds
into the game.

The situation was that there was a small network of computers
connected to an ISDN router that did address translation.

Address translation allows multiple computers to use the internet
through a single TCP/IP address. This is accomplished by having
the router perform some “invisible” port and ip renaming on
everything that goes out.

I think that is a rather evil thing for a router to do, but I
suppose I can see the incentive from an address pressure viewpoint.

Routers know when TCP streams begin and end, so they make sure the
port mappings stay constant through the entire thing, but quake
uses UDP packets (anyone who suggests using TCP for a realtime
game does not understand how the error recovery works), and the
router appears to be making the incorrect assumption that UDP is
only used for simple request / response protocols.

The router changes the UDP port while you are playing.


Now, a smarter router would only change the port numbers when it
was actually forced to by a collision, which would only be when
a connection was first opened, and everything would work out ok.

After I understood what was happening, I could devise a fix for
it. My simple fix was to make the server simply ignore the port
number for client comparisons, and assume that if a packet came
from the same IP address, then it is the same player even if the
port number changed. That worked, and he was able to connect in
to my modified server.

That has the distinct drawback of making translating routers or
proxies that do the port mapping correctly unusable by more than
one player at a time.

I could fix it completely by including a sort of port number in
each message, and having the servers match and update UDP ports
based on that. That would work fine, but at the cost of adding
a byte or two to everyone’s packets to help out people with bad
routers. You wouldn’t be able to tell a difference, but it’s the
principle of it…
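The “sort of port number in each message” approach can be sketched like this: the client picks a random 16-bit token at startup and sends it in every packet, and the server keys clients on (IP, token) rather than (IP, UDP port), adopting whatever UDP port the router presents. Names and structure here are mine, an illustration of the idea rather than the shipped code:

```c
#include <stddef.h>

#define MAX_CLIENTS_SKETCH 8

typedef struct {
    unsigned       ip;        /* sender address (opaque here) */
    unsigned short qport;     /* client-chosen token, never changes */
    unsigned short udp_port;  /* may be silently rewritten by a router */
    int            in_use;
} sv_client_t;

static sv_client_t sv_clients[MAX_CLIENTS_SKETCH];

/* Find the sender by IP + qport, ignoring the UDP source port;
   if the router renamed the port mid-game, just adopt the new one.
   Two players behind the same translated address still stay
   distinct, because they picked different qports. */
sv_client_t *find_client(unsigned ip, unsigned short qport,
                         unsigned short udp_port)
{
    for (int i = 0; i < MAX_CLIENTS_SKETCH; i++) {
        sv_client_t *cl = &sv_clients[i];
        if (cl->in_use && cl->ip == ip && cl->qport == qport) {
            cl->udp_port = udp_port;   /* track port drift */
            return cl;
        }
    }
    return NULL;
}
```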

I could make a server side cvar to force port fixing on, but that
would still not work for one class of users or the other.

I could make it client settable and have the client tell the server
on connect that it needs port fixing. That would work with no
bandwidth cost to anyone, but it would require users to know that
if they can’t connect to servers, then they should try to use the
fix translation option. Unfortunately, I bet that there are some
routers that exhibit this problem much less often. A drop every
ten minutes would be hard to attribute.

I could make port fixing on by default, but if anyone is on a
translated lan and another person tries to start a net quake game
to the same server then they will both collide and crash and burn.

I am probably going to add the extra bytes to every packet. Being
automatically robust on more people’s systems is probably worth a
microscopic loss of bandwidth. Two bytes is under one millisecond
of ping on a modem.
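The arithmetic behind that claim, assuming a 28.8 kbps modem uplink:

```latex
t = \frac{2\,\text{bytes} \times 8\,\text{bits/byte}}{28800\,\text{bits/s}} \approx 0.56\,\text{ms}
```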

If there is some magic range of port values that I can use to make
these routers act better, let me know.

These changes will break the connection protocol again, so I am
going to hold off on the patch for a while.


Mods and 3.09

Filed under: — johnc @ 2:46 pm

Until we release the new gamex86 source code, if you want to
make mods work with 3.09, change GAME_API_VERSION to:


and recompile the mod.

This will let it run with the 3.09 servers. The API didn’t
actually change, I just had to bump that version number so that
we could detect the old q2test dlls still hanging around.


Patching the Patch

Filed under: — johnc @ 11:44 pm

We have rebuilt the 3.09 patch with a new version of the install
program. Some people were not able to run the installer because
a temp directory wasn’t setup correctly. There are NO OTHER CHANGES
in this, so if you were able to install the last 3.09, don’t bother
getting this one.

Too many Emails

Filed under: — johnc @ 8:22 pm

Please cool it a bit with the email to me unless it is really
important. I’ll never get trinity done with the email pouring
in the way it is right now…

Choppy Video Playback in 3.09

Filed under: — johnc @ 3:11 pm

The only widely reported problem with 3.09 is that the
video playback is choppy. The fix for the modem connections
reduced video playback to 10 fps. It’s a one-line fix, but
I’ll hold off on another version until a few more things

I am curious what the breakdown of opinion is on the rapid
patch releases. If one of the polling websites would pose
the question, I would appreciate it.

A more leisurely patch release would allow us more testing time, and
some problems (like this cinematic bug) would certainly be killed
before the public saw it, but I definitely found a couple things from
the public that no amount of testing on our machines would have found.
Some things only showed up with 48 people playing on our servers for
several hours.

Once again, we really didn’t have a choice this time because of the
server crashers, but we are planning another release in two to three

I am happy to produce new versions fairly rapidly, rather than at monthly
intervals, but I know that many people are getting a little irate at
having to download new patches. There is a simple solution – if you
don’t want to be on the bleeding edge, wait a week after a patch is
announced and see how it is working for other people.

What finally helped me get to the bottom of some things was
just getting people with problems we couldn’t reproduce to call
me and let me send them executables by email until I figured out
what was going on. From now on, if you send a detailed problem to me,
include a phone number and times when you can be reached. I’m not tech
support, so you certainly can’t count on a response, but if you have
a nice repeatable case of a problem that is high priority for us that
we can’t reproduce otherwise, your personal help may be useful.

BTW, does anyone know why Quake 2 became a hacker target? I can keep
fighting attacks, but spending my time there doesn’t help anyone’s game,
and there are a bunch of things that fundamentally can’t be stopped if
people really set their mind to messing up the servers.

New Version

Filed under: — johnc @ 2:37 am

new version:

This one has an install that makes sure things get where they
need to…


Gamex86.dll from q2test causes Crashes

Filed under: — johnc @ 10:03 pm

If Quake2 is crashing on you after upgrading, it is probably
because you still have the gamex86.dll from q2test in your
quake2 directory. The latest quake2.exe just started looking
in the exe directory as well as the game directory to make
debugging easier, and it brought out this problem. You should
only have gamex86.dll in baseq2 unless you are doing specific

I had a version check in there, but I never bumped the game
api version, so it was ineffective.

We are going to release yet another new version tonight.

The big news is that the modem connection and level changing
problems are fixed. They should have been fixed in 3.07, but
a timing error kept it from functioning.

I also found the “no such frame” warnings that scrolled by
under some circumstances. BFG gibbing crouching people would
cause it.

There are several other fixes in the menu and renderers as well,
so everyone should upgrade.

We are testing with 3.09 on our servers now, but I want to
make an incompatable change before releasing:

Right now, any client can send a “connect” message to the
server and grab a client slot. If they are the wrong version,
they will tie that slot up until they time out or abort the
connection process.

I am going to force clients to send their version number
with the connection request, so that bad clients will never
take up slots.

That will require everyone to upgrade to 3.09 to play.

I apologize for the flurry of versions, but this was a forced
set of releases due to the server attacks, and lots of people
are on vacation here. It certainly could have been tested
better, but I thought it better to try and get something out

Check back in the morning for a new version…

BTW, we will release the new gamex86 source code after we are
convinced that we aren’t going to be making another patch
for a couple weeks.

Still no Crashes

Filed under: — johnc @ 5:34 pm

No crashes on any of the servers!

A few comments on some reported problems:

You have to press the “attack” button to respawn in deathmatch
now. This allows you to chat and go into the menu. I have
got several mails from people that are typing “kill” or
reconnecting to servers after they die…

Old savegames will NOT work with the patch. Just cheat yourself
to approximately the same place you were before. The game included
config files for starting off at each unit. You can exec one of
those to get you close, then do “give” commands if you want to be
more precise. (bigguun.cfg, boss.cfg city.cfg, command.cfg,
factory.cfg, hangar.cfg, jail.cfg, mine.cfg, power.cfg, space.cfg,

I think several people are failing to get the gamex86.dll into the
baseq2 directory. If “fov 120” doesn’t change your field of view,
the server doesn’t have the right gamex86.dll.

No Crash

Filed under: — johnc @ 5:17 am

Ok, two hours without a crash on four servers.

Here is a new patch:

3.07 and 3.08 can interoperate fine. All servers should upgrade
to 3.08, but if you grabbed the 3.07 earlier today and only play
as a client and don’t need timedemo, you don’t need to upgrade.

Problems with the 1.07 Patch

Filed under: — johnc @ 3:10 am

There were a few problems with the 1.07 patch:

Bodies stuck under doors caused a repeated explosion effect.
Timedemo was broken.
The servers crash about once an hour under full load.

I have the first two fixed, and I hope the third. The four
servers at Id are running a new executable.

If the servers don’t crash in the next couple of hours,
I’ll put another release out.


Quake 2 1.07 Patch

Filed under: — johnc @ 10:07 pm

The 1.07 patch is out:

Please mirror and distribute this.

When submitting bugs, make sure you say that you already have the
3.07 patch.

Christian will go through and update the bug page when he gets back
from vacation next week.

This release does not fix all known problems. We intend to have
another release in a few weeks.


Quake 2 Bugfix Release

Filed under: — johnc @ 3:21 pm

We are going to release a new Quake 2 executable that fixes the
malicious server crashing problems Real Soon Now. It also fixes a
ton of other problems that have been reported, so we are going to
have to give it some good testing before releasing it.

John Cash has two kids that would lynch him if he came in and worked
on Christmas, so we certainly won’t be able to get a release candidate
together before the weekend. I am fairly confident we will have it
released to the public on Sunday.

I have been spending most of my time on trinity research but I have
still made quite a few fixes to Q2. John Cash has made many more (he
is just finishing up the IPX coding, among other things).

I have been doing a lot of testing over a proxy that gives me a very
bad ping (400 - 800), so I was able to find and fix two significant
errors with the prediction code.

The reason why you get a jerk when running forward and firing rockets,
blasters, or grenades is that the client side prediction code was
blocking you on your own missiles.
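The missile fix can be sketched as a single guard in the prediction clipping loop. This is an illustrative reconstruction, not the actual Quake 2 source; the struct and field names are invented.

```c
/* Illustrative sketch of the fix: during client-side prediction,
 * skip clipping against entities the local player owns (rockets,
 * blasters, grenades), so you never block on your own missiles.
 * Struct and field names are invented, not Quake 2's. */
typedef struct {
    int owned_by_local_player;  /* missile fired by the predicting client */
    int solid;                  /* would normally block movement */
} pred_ent_t;

int PredictionShouldClip(const pred_ent_t *ent)
{
    if (ent->owned_by_local_player)
        return 0;               /* your own missiles never block you */
    return ent->solid;
}
```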

The jerky behavior on plats was due to a subtle error in the prediction
error interpolation. A prediction error was causing oscillations as long
as your latency, instead of smoothing out over just 100 ms. The plats
are now smooth as long as you aren’t dropping packets, and other
mispredictions are also handled much better.
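The smoothing fix amounts to folding a detected misprediction back in over a short fixed window, rather than letting it ring for a full round trip. A minimal sketch, with invented names and the 100 ms figure taken from the text above:

```c
/* Fraction of a detected prediction error still applied elapsed_ms
 * after detection: starts at 1.0 and decays linearly to zero over a
 * fixed 100 ms window, independent of the connection's latency. */
#define PRED_ERROR_DECAY_MS 100

float PredictionErrorFraction(int elapsed_ms)
{
    if (elapsed_ms <= 0)
        return 1.0f;
    if (elapsed_ms >= PRED_ERROR_DECAY_MS)
        return 0.0f;
    return 1.0f - (float)elapsed_ms / PRED_ERROR_DECAY_MS;
}
```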

There are still a lot of other things that will be fixed in an upcoming
release, but this will definitely be an executable worth grabbing.

My fixes:

* zombies aren’t being removed properly
* joystick not in menu
* classname for rockets and bolts
* no screaming when invulnerable and in lava
* lowered water blend values
* clear powerups when dead (no more breather sounds)
* only play “computer updated” three times max
* mapname serverinfo now updated properly
* changed “rejected a connection” to “Server is full”
* made console “rejected a connection” a developer only message
* made WSAWOULDBLOCK warning silent
* max 10 packets/second during connection process
* set cl_maxfps to 90
* increased loading plaque timeout value to 120 seconds
* paused not default to 1
* no savegame in deathmatch
* fixed ; binding from menu
* no crouch when airborne
* removed half-baked $ macro expansion
* pause on landing before re-jump (fixes no fall damage bug)
* public server framework
* no ; comment in config files
* teleporter events
* lower hyperblaster damage
* don’t use PORT_ANY for clients!
* fix the entity number thing here
* don’t re-check CD after the first time
* auto cddir from cd scan
* disallow kill from intermissions
* faster rockets
* less bfg effect damage
* remove packet command from client
* strip trailing spaces on cmd_args
* added protocol to serverinfo
* used CMD_BACKUP instead of UPDATE_BACKUP for phone jack
* don’t predict clip into your own missiles
* good netgraph
* validate userinfo for semicolons and quotes
* don’t copy savegames on dedicated servers
* also check current directory for game dll loading
* changed connect packet on client to differ from server
* bump protocol version
* fixed error interpolation on plats
* only respawn with attack or jump
* fov as a userinfo
* show weapon icon if fov > 90


Merry Christmas!

Filed under: — johnc @ 5:01 pm

The DOOM source is up.

Merry christmas!

———- contents of README.TXT ————–

Here it is, at long last. The DOOM source code is released for your
non-profit use. You still need real DOOM data to work with this code.
If you don’t actually own a real copy of one of the DOOMs, you should
still be able to find them at software stores.

Many thanks to Bernd Kreimeier for taking the time to clean up the
project and make sure that it actually works. Projects tend to rot if
you leave them alone for a few years, and it takes effort for someone to
deal with them again.

The bad news: this code only compiles and runs on Linux. We couldn’t
release the DOS code because of a copyrighted sound library we used
(wow, was that a mistake – I write my own sound code now), and I
honestly don’t even know what happened to the port that Microsoft did
to Windows.

Still, the code is quite portable, and it should be straightforward to
bring it up on just about any platform.

I wrote this code a long, long time ago, and there are plenty of things
that seem downright silly in retrospect (using polar coordinates for
clipping comes to mind), but overall it should still be a useful base
to experiment and build on.

The basic rendering concept – horizontal and vertical lines of constant
Z with fixed light shading per band – was dead-on, but the implementation
could be improved dramatically from the original code if it were
revisited. The way the rendering proceeded from walls to floors to
sprites could be collapsed into a single front-to-back walk of the bsp
tree to collect information, then draw all the contents of a subsector
on the way back up the tree. It requires treating floors and ceilings
as polygons, rather than just the gaps between walls, and it requires
clipping sprite billboards into subsector fragments, but it would be
The Right Thing.
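The single-walk idea can be sketched with a toy BSP traversal: recurse into the viewer's side of each partition first, so subsectors come out strictly front to back. The tree layout, the 1-D plane test, and the visit hook below are illustrative stand-ins, not DOOM's actual data structures.

```c
/* Toy front-to-back BSP walk: at each node, descend the side the
 * viewer is on first, so leaf subsectors are visited nearest-first.
 * A real renderer would collect clip information on the way down
 * and draw each subsector's contents on the way back up. */
typedef struct bspnode_s {
    int split;                   /* stand-in for a splitting plane */
    struct bspnode_s *front, *back;
    int subsector;               /* leaf payload; -1 for internal nodes */
} bspnode_t;

int visit_order[16];
int visit_count = 0;

void WalkBSP(const bspnode_t *node, int viewer)
{
    if (node == 0)
        return;
    if (node->subsector >= 0) {          /* leaf */
        visit_order[visit_count++] = node->subsector;
        return;
    }
    /* toy 1-D "which side is the viewer on" test */
    const bspnode_t *nearside = viewer >= node->split ? node->front : node->back;
    const bspnode_t *farside  = viewer >= node->split ? node->back  : node->front;
    WalkBSP(nearside, viewer);           /* near side first */
    WalkBSP(farside, viewer);
}
```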

The movement and line of sight checking against the lines is one of the
bigger misses that I look back on. It is messy code that had some
failure cases, and there was a vastly simpler (and faster) solution
sitting in front of my face. I used the BSP tree for rendering things,
but I didn’t realize at the time that it could also be used for
environment testing. Replacing the line of sight test with a bsp line
clip would be pretty easy. Sweeping volumes for movement gets a bit
tougher, and touches on many of the challenges faced in quake / quake2
with edge bevels on polyhedrons.
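The BSP line clip described above can be shown in one dimension: split the sight line at each partition and recurse, failing only when some piece ends in a solid leaf. This is a toy 1-D analogue to show the recursion shape; real code splits a 3-D segment against planes.

```c
/* Toy 1-D BSP line-of-sight clip: coordinates >= split are "front".
 * Returns 1 if the segment [a, b] (a <= b) passes only through
 * non-solid leaves. Real BSP code does the same split against
 * 3-D planes. */
typedef struct los_node_s {
    float split;
    struct los_node_s *front, *back;    /* both NULL at a leaf */
    int solid;                          /* leaf only: blocks sight */
} los_node_t;

int SightClear(const los_node_t *n, float a, float b)
{
    if (n == 0)
        return 1;
    if (n->front == 0 && n->back == 0)  /* leaf */
        return !n->solid;
    if (b < n->split)
        return SightClear(n->back, a, b);
    if (a >= n->split)
        return SightClear(n->front, a, b);
    /* segment crosses the partition: both halves must be clear */
    return SightClear(n->back, a, n->split) &&
           SightClear(n->front, n->split, b);
}
```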

Some project ideas:

Port it to your favorite operating system.

Add some rendering features – transparency, look up / down, slopes, etc.

Add some game features – weapons, jumping, ducking, flying, etc.

Create a packet server based internet game.

Create a client / server based internet game.

Do a 3D accelerated version. On modern hardware (fast Pentium + 3DFX)
you probably wouldn’t even need to be clever – you could just draw the
entire level and get reasonable speed. With a touch of effort, it should
easily lock at 60 fps (well, there are some issues with DOOM’s 35 Hz
timebase…). The biggest issues would probably be the non-power-of-two
texture sizes and the walls composed of multiple textures.
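For the non-power-of-two issue, the usual workaround on period hardware was to round each texture dimension up to a power of two and resample. A minimal sketch of the round-up step; the function name is invented:

```c
/* Round a texture dimension up to the next power of two, for
 * hardware that can only sample 2^n-sized textures. Many DOOM
 * textures (e.g. 24 or 96 pixels wide) would need resampling. */
int RoundUpToPow2(int n)
{
    int p = 1;
    while (p < n)
        p <<= 1;
    return p;
}
```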

I don’t have a real good guess at how many people are going to be
playing with this, but if significant projects are undertaken, it would
be cool to see a level of community cooperation. I know that most early
projects are going to be rough hacks done in isolation, but I would be
very pleased to see a coordinated ‘net release of an improved, backwards
compatible version of DOOM on multiple platforms next year.

Have fun.

John Carmack


The Quake 2 Public Code Release

Filed under: — johnc @ 11:41 pm

The Quake 2 public code release is up at:

This source code distribution is only for hard-core people that are going to
spend a lot of time poring over it. This is NOT a how-to-make-levels-for-q2
type distribution!

This should keep a bunch of you busy for a while. :)


Big Bug!

Filed under: — johnc @ 11:04 pm


If you run multiplayer servers, download:

A serious bug got through… I thought the QuakeWorld master server
code was completely disabled, because I was planning on putting a
modified architecture in place in the point release. It turns out
that the code is still in there, sending heartbeats to a unix
machine here at id that isn’t even running a master server.

That wouldn’t normally be an issue – a packet every five minutes
from all the servers.


Cyrix has a new processor that is significantly faster at single
precision floating point calculations if you don’t do any double
precision calculations anywhere.

Quake had always kept its timebase as a double precision seconds value,
but I agreed to change it over to an integer millisecond timer to
allow the global setting of single precision mode.

We went through and changed all the uses of it that we found, but
the routine that sends heartbeats to the master servers was missed.

So, instead of sending a packet every 300 seconds, it is sending one
every 300 milliseconds.


To a server, it won’t really make a difference. A tiny extra packet
three times a second is a fraction of the bandwidth of a player.
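The bug is a pure unit mismatch: a threshold written when the timebase meant seconds, compared against a clock that now counts milliseconds. A schematic reconstruction with invented names:

```c
/* Schematic of the unit-mismatch bug: HEARTBEAT_INTERVAL was written
 * as 300 when curtime meant seconds. Once curtime became an integer
 * millisecond timer, the same comparison fires after 300 ms, i.e.
 * roughly three times a second instead of every five minutes. */
#define HEARTBEAT_INTERVAL 300   /* intended: 300 seconds */

int ShouldSendHeartbeat(int curtime, int last_heartbeat)
{
    return curtime - last_heartbeat > HEARTBEAT_INTERVAL;
}
```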

However, if there are thousands of network games in progress, that is
a LOT of packets flooding the network.

So, please download the new executable if you are going to run any
servers (even servers started through the menus).

This isn’t the real point release – there are no new features or
bugfixes. I just went back to the release codebase and recompiled
with one function commented out so we wouldn’t have to worry about
introducing new bugs with our current untested code.

Btw, all bug reports should go to Christian,
NOT to me, Brian, or Cash! We need a central point to funnel
things through. Hopefully we can set up a web page or something
to make public what we know about, so we can cut down on email.


A Couple Things

Filed under: — johnc @ 10:11 am

A couple things I forgot to mention:

DOOM source. Still planned to be released Real Soon Now, but there is some work that needs to be done on it first to remove the sound engine, which was written by someone else.

Our Quake editor. It will be released with the tools, but it really isn’t going to be all that useful to many people. Most people will be better off with one of the actively supported editors designed for normal machines.

There is no documentation (Steve Tietze at Rogue has talked about writing something, though). It is designed to run at 1280*1024 resolution on a fast, fully-compliant OpenGL driver. It was designed for high-end boards like Intergraph Realizm, 3DPro, and Glint boards, but it also runs OK on 8 MB consumer boards like the Permedia II and Rendition V2200. It will NOT work with Voodoo or PowerVR. It is unlikely to work with Voodoo Rush, because of framebuffer size limits, but it might work at a low screen resolution. It might be workable on RIVA cards if they do some fancy work disposing of buffers between window renderings (they are 4 MB cards, but the textures can stay in AGP memory, so it will almost be enough). I’ll work with them if they want to give it a try.

Right now, only 3Dlabs has a full OpenGL driver on win-95 (and it is a little flaky). All the other cards would require you to run NT. Over the next several months, most of the major vendors should be releasing full OpenGL drivers that work in ‘95, but there are no firm release dates.

A comment to the people complaining about the release not having Their-Favorite-Feature:

A software project is never, ever completely finished. If you wait until EVERYTHING is done, you won’t ship at all.

Would it have been the right thing to delay releasing Quake 1 until I had written the glquake code and the QuakeWorld code? Or we had gotten Paul to build us all new models? Or we had made all new maps that hang together thematically?

If we had, we would be releasing Quake right about now. It would be a much better game (it would be Quake 2), but all of the enjoyment that everyone has gotten from Quake would have been lost. It would have been the wrong decision.

Quake 2 is great, and it will get better yet after its release.

A reminder about “John Carmacks”:

Anyone claiming to be me on IRC is lying. I have never been on IRC, and if I ever choose to, I will mention it here first.

If you get an unsolicited email from “John Carmack”, the odds are high that it was spoofed. Every couple of days, I get a mail bounce from someone who messed up on a spoofed mail, and I often get confused responses from people that I have never mailed.


Quake 2 Mastered

Filed under: — johnc @ 4:08 pm

Quake 2 has mastered.

Where we go from here:

Point release.

We should have a Quake 2 point release out shortly after the game gets in your hands. We intend to fix any bugs that turn up, improve the speed somewhat, and optimize for internet play in various ways. We will also be making several deathmatch only maps.

Deathmatch in Q2 has gotten a lot of LAN testing (Thresh, Redwood, and Vik Long helped quite a bit the last week with tuning), but not much internet testing. There are probably gaping holes in it, but we will address them soon.

The deathmatch code in the shipping Q2 is also not designed to hold up against malicious users – there is no protection against clients being obnoxious and constantly changing skins, chat flooding, client-side cheating, or whatever.

Q2 does checksum maps on the client side right now, so cheater maps won’t work like they do in Q1, but cheater models and skins are still possible. I have some plans to combat that in the point release, but there are a lot of forms of cheating that can be implemented in proxies that are fundamentally not detectable. I can make it very painfully difficult for people to implement such things, but a very clever person with a disassembler just can’t be stopped completely.
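Client-side map checking boils down to hashing the file bytes and comparing against a value the server trusts. The rotate-and-xor checksum below is only the general shape of such a check; Quake 2's actual algorithm (and how the value is exchanged) is not shown here.

```c
/* Toy rotate-and-xor checksum over a byte buffer: the general shape
 * of a client-side file check, not Quake 2's actual algorithm. */
unsigned int FileChecksum(const unsigned char *data, int len)
{
    unsigned int sum = 0;
    int i;
    for (i = 0; i < len; i++)
        sum = ((sum << 1) | (sum >> 31)) ^ data[i];  /* rotate, then mix in byte */
    return sum;
}
```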

The server code and network protocol should be able to support ultra-large player counts, but I know I need to do some low-level work to get around operating system buffer limits before it will actually work. We will test at least a hundred players in a giant map for the point release, but we won’t actually address the issues of making a rational game at that level (chat hierarchies, team spawning, etc). I am very much looking forward to seeing what the user community creates on that foundation.
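One concrete form that low-level work around operating system buffer limits can take is enlarging the per-socket kernel buffers, since default UDP buffers can drop packets under a hundred-player load. A POSIX sketch; whether this matches what id actually did is a guess.

```c
#include <sys/socket.h>

/* Ask the kernel for larger send/receive buffers on a UDP socket so
 * bursts of packets from many clients aren't dropped at the OS level.
 * The kernel may clamp the request to a system-wide maximum. */
int GrowSocketBuffers(int sock, int bytes)
{
    if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &bytes, sizeof(bytes)) == -1)
        return -1;
    return setsockopt(sock, SOL_SOCKET, SO_SNDBUF, &bytes, sizeof(bytes));
}
```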

The point release may have incompatible network protocols and savegames. Fair warning.

Q2 Demo.

After the point release, we will be making a new demo release. If you experienced compatibility problems with q2test, or were unsatisfied with the quality in some way, you should look at the demo. The final product is much improved.

Q2 Ports.

We are committed to Win32 Alpha, Linux, IRIX, and Rhapsody in that order. It is likely that a bunch of other ports will come later, but no promises. The presence of hardware-accelerated OpenGL on a platform will improve its odds a lot. Zoid will probably prioritize Q2 CTF over other ports, so hold off on bugging him about ports for a while.

Development tool release.

I will basically be making publicly available a subset of the directory tree that we will deliver to our licensees. All the utility source code, the game dll source code, and probably some example source media – .map files, artwork, model source, etc.

Q2 mission pack.

Most of the company will be working on a mission pack while Brian and I write tools and technology for trinity.


I am going to rapidly wean myself off of working with quake so I can concentrate fully on new directions. The evolution of the Q2 codebase will be left to John Cash (until the mission pack ships) and Zoid.

Everyone should keep in mind that any next-generation game that we produce is a LONG way off, so don’t start getting all worked up over it, ok?

For the curious, it does look like Java is going to start playing a significant role in our future projects. All of the lightweight utilities will be Java applications (some requiring OpenGL bindings). The heavy-duty number crunching utilities will probably stay in C. It is still unclear how much of the game framework and the level editor we can get away with doing in Java.


Work Log

Filed under: — johnc @ 5:33 pm

nov 1
* interpolate prediction error
* fixed farthest respawn
* removed backspeed
* no pickup weapons when dead!
* multiple crosshair pics
* fixed dropping items in wall
* disabled auto weapon switching in deathmatch
* respawn_time
* more precaches
* removed doubles

+ pitch around bug
+ ping calculation
+ are demos broken with prediction?
+ no footsteps if moving slow?
+ kill self command
+ no toss weapons in wall
+ footstep doubletap
+ clear gib flag on respawn
+ faster weapon drop times

precache talk wav
qbsp: MAX_MAP_AREAS when leaking?
flies effect on hyperblaster???
no slide under staircase
make blaster bolt move faster
make dedicated server sleep
connect to other server while playing bug
bit code net messages?
win95 joining
changeweapon work better when out of ammo
pop in maps?
highlighted numbers
sort image_t lists?
switching rules
brain effect
monsters source shots before testing line of sight
secret doors
obituaries from monsters
warp gate effect
teleporter effect
increase max switched lights
max lightstyles bug
shorter wav latency
beep beep on pc icon
blood jets from pain skins
gibs shoot up out of lava
savegame in water bug
font outlines not sharp in gl
weapons vanishing on toss?
longer pause after death before respawn
temp invulnerability after respawn?
telefrag not always working?
step up in water?
clear powerups on death
don’t hold a grenade when none left
l_health item precache?

nov 2
* fixed pitch clamping
* Com_PageInMemory
* fixed menu cursor time
* net connect when playing bug
* custom skins
* fixed server update without game update bug

+ bump version numbers
+ don’t go to half console until connected
+ delay before firing bfg
+ stairup allows wall climbing now
+ previous frame issues
+ don’t copy all of frame.packetentities

no server pause in dm
normalize skin texture coords for software
splashing sound when swimming in water
software underwater surfaces
nopredict option at server
scroll inventory
smooth step up
remove rand1k
fire func_explosive targets when starting in deathmatch

nov 3
* fixed divide by zero in kickback
* fixed overflow
* fixed walkmap up slopes
* bumped versions
* grabbed all cinematics
* fixed abort intro issues
* kill command
* fixed server status command
* more weapon precaches
* noexit by default
* autoremove some stuff in deathmatch
* make game initialization like – sound –
* pumped message loop during caching
* client persistent data

+ no drop weapons without ammo
+ no footsteps when walking
+ saved across kills
+ saved across levels in single player
+ client levelstate
+ clear client times on level change
+ shouldn’t be dumping unreliable messages
+ palette changes on cinematic

load game should throw loading plaque immediately
not pausing when menu is up!
“don’t need” sound for no pickup?
min_intermission cvar?
run key should be a toggle
dropped items respawn
auto use items
powerup sounds
print sound precache pacifiers
sendkeyevents during loading?
better pingservers
alias models are lit outside of dlight ball
“killed by” icon on scores?
don’t allow dedicated without deathmatch
map transitions, but gamemap doesn’t?
intermission spots with deathmatch
blinking health indicator
blinking computer indicator
move swapbuffers and add flush to glquake?
chaingun sounds off on NT?
test ping on serial port to serial port connection on win95
console prog commands
lower the scoreboard readout so you can read the obituary message

nov 4
* allowed bad sky textures
* cinematic tweaks in gl
* high res skins
* fixed duck speed issues

+ names are messed up
+ not disconnecting cleanly?
+ cinematic GL wrapping problem
+ cinematic quality issues
+ cinematic sound
+ cinematic end frame marker
+ switch sound to high quality for cinematic
+ is idlog aborting early on menu?
+ dropping items makes them respawn in deathmatch
+ crouch strafe is faster than forward
+ sink in plats bug
+ lower paused icon on screen
+ min firing ammo for dropping weapons
+ spawn explosions with a random yaw

gun puff animations wrong?
no status bar during intermission
are sky images freed properly?
wading sound
use 16 to 8 table in ref_soft for tga loading?
savegame off pak file
demo file parsing from pak?
check replace alpha value for mcd hack
screen update timer for software opengl?
instant items
item sounds
include texture source size in texinfo so other scaled versions can be made?
are cinematics using color 0?
send pak checksum to server?
fix dedicated_start
print dm rules on connect?
blink f1 and play sound
skill values!
loadgame from console
input based demos for profiling

nov 5
* fixed sink into plat bug
* fixed scoreboard display between deathmatch levels
* separated game dll definitions

+ clamp low cin times

clear angles on loadgame
check client entering during intermission
separate headers for monsters and players

nov 6
* s_testsound 1
* fixed streaming sound on 95
* streaming sound at full volume
* removed multiply from mixing
* khz change for cinematics
* blaster precaches
* fixed cinematic from pak streaming
* don’t use primary sound buffer option

+ precache blaster
+ dropped grenades on death shouldn’t respawn there…

set hostname by ip hostname
timegraph not right
rename map to start
check entire game without asm code
no mouse cursor when fullscreen
die with grenades needs to stop ticking sound
high quality sound directories

nov 7
* flag reorg
* teleporters
* put holdangles into pmove.pm_type

+ pm.touchents holds duplicates
+ damage anything flag
+ precache chat sound
+ teleporters at player spawn points
+ remove rocket fragments in dm

rename entity_t to rentity_t ?
teleport sequence bit to make ef_teleport reliable?
turn any event into a temp entity? (with or without angles)
unify sound starting as temp entities?
is time being over quantized by timegettime?
order events by priority
login / logout as events?
all sound channels as extra events?

trinity: objects should have enabler inputs as well as multiple
impulse targets

nov 8
* make random respawn option default and work
* don’t drop empty weapons
* teleport angles
* teleporters at player spawn points
* fixed telefrag self on respawn
* fixed userinfo on initial entering
* precache land sounds
* don’t change console height until connection packet
* a disconnecting client shouldn’t generate a badread
* remove rocket fragments in dm
* damage anything flag
* don’t call duplicated pm.touchents
* client parse entities array
* no weapon toss in single player

sound dies after several hours?
scoreboard faces
show killed by face on scoreboard
deal with old_origin properly
delay cdtrack play until connected
can’t escape out of loadgame menu when dead?

nov 9
* no client pmove at all with prediction off
* railgun crashes
* fixed rub stuck bug

+ gravity in pmove
+ control config is messed up
+ separate client event processing from parsing
+ are baselines not working right?

make-item-selected command
notched look up / down commands
keyboard look
map name is messed up on start server
deathmatch character weapons
unify uses of ent / client / etc in source?
no player actions when paused
check incoming_acknowledged for pings
master servers
dm respawn isn’t clearing pitch?
teamed teleport destinations
teleport angles
# form for kicking players
make showinventory a stat bit, like layout
unify all data file references for easy downloading?
how to handle different quality versions?
if crushed in dm, respawned outside world?
rename entity_t->flags to renderfx
checksum client to server messages
smooth out step up
does spectator work?
trinity: coarse radiosity source lattice for dynamic lights?
oldorigin issues
replace with a previous state send?

nov 10
* qe4 bad class parse bug
* grenade bounce generates two sounds
* playerstate delta
* qdata variable sound rates

check the replace mode alpha bug
replace all muzzle flashes with events
lower railgun

nov 11
* cinematic playback at variable sound rates
* qdata multiple video in single file fix
* only one token huffman bug
* demowaiting
* allowed individual reliable overflows
* make all clients invisible at intermission point
* mask high bits in client_t->name
* full delta compression

don’t show paused plaque in dm
check all serverinfo flags (skill, nomonsters)
crunch the scoreboard data a lot
dedicated net thread
random-not-nearest option
“public” variable
check the “using previous_origin” notes
echo chats to console?
weapon icon when hand is centered
game skills
better console keyboard editing
are demos broken with current delta compression?
should client string commands be tied to usercmd_t, to fix drops?
why is ref_gl.dll as large as it is?
allow round up and >256 textures on gl
mouse during cin
echo chats to console
is the dedicated server sleeping?
get mins/maxs from pmove

warn at startup if any spawn point is in a wall
make teleport pads glow?
never make water solid for entity cull purposes?
get sound position needs to check the valid count on entities
make secondary sound buffers the default?

pak file sorting by traces?
no colored lighting with mono lightmaps!!!

option to make picking up items not select

change stretch-raw to a dedicated full screen blit for better performance?

bsp to do:
water problems
allow any number of light styles
MAX_MAP_AREAS when leaking?


Comments about the Quake 2 Test

Filed under: — johnc @ 11:46 am

Many of the comments about the Quake 2 test are already being addressed. We expected quite a few of them, but the test has served its purpose of bringing in some good feedback that we couldn’t have predicted.

The final game will definitely be better as a result of the test.

However, it certainly won’t please everyone. I am confident that the majority will think that Quake 2 is significantly better than anything we have ever done before, but even if we please 80% of our potential customers, that will still leave a couple hundred thousand people thinking that we let them down.

I suppose that I have it the easiest there – I can always defend my technical decisions with specific discussions of my evaluations of the tradeoffs that led me to the paths I chose. In fact, in a large number of cases when someone suggests something, I can actually say “Tried it. Didn’t work as well.”

Defending level design, artwork, or sounds is a lot harder. We can’t even always agree here at id on many of these issues, so we know for sure that we can’t please all the users simultaneously. All we can do is put talented people on the job and have confidence in their abilities.

Note: Q2TEST DOES NOT INCLUDE ANY HIGH QUALITY SOUNDS! That would have added another 15 megs to the demo size. Selecting high quality sounds just upsamples the existing 11 kHz / 8 bit sounds. There is a significant quality increase (at a slight speed and memory cost) with the full production sounds.
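What the “high quality” toggle does to q2test's sounds is essentially this: duplicate samples to reach the higher rate, which adds no information. A schematic sketch with invented names:

```c
/* Nearest-sample 2x upsampler for 8-bit mono audio: each input
 * sample is written twice. The output plays at double the rate but
 * carries no more information than the 11 kHz original, which is
 * why upsampled q2test sounds don't actually sound better. */
void Upsample2x(const unsigned char *in, int count, unsigned char *out)
{
    int i;
    for (i = 0; i < count; i++) {
        out[2 * i]     = in[i];
        out[2 * i + 1] = in[i];
    }
}
```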

Quake 2’s goal is to be the best first person shooter ever. We are trying to evolve a genre, not move to a different one. If you don’t want a game that mostly consists of running around and killing things, you will be disappointed. We are trying to be cohesive, but not deep. I have high hopes for the games that are attempting to apply our technology to other genres, but don’t look for it in Quake 2.

A quick plug:

If you have any interest in programming, you should look at Michael Abrash’s Graphics Programming Black Book Special Edition. It has just about everything he has written, from the ancient work on optimizing for the 8086 (still interesting to read) to the articles written during Quake’s development.

I personally learned a lot from Michael’s early articles, and I was proud to contribute to the later ones.


Quake 2 Test

Filed under: — johnc @ 5:25 pm

I hope everyone is enjoying the Quake 2 test.

It’s always hard to release a version of a product that you know isn’t in its final form. There are plenty of things that are getting better every single day, but we need to chop it off at some point to let everyone test it out.

We will do another demo after we finish the full retail product, so if you don’t like looking at preproduction stuff, wait for that one.

Still, I am pretty happy with the test. I think Quake 2 is definitely the most cohesive game we have ever done.

Don’t worry – just because the test doesn’t have multiplayer in it, it doesn’t mean that we haven’t been thinking about it. Many features in the quake 2 architecture are going to enable a whole new level of net play. It will take a few months after the full release for all the potential to start showing through, but just you wait!

The biggest changes to Quake 2 are internal. Anyone doing modification work on Quake is going to be ecstatic when they get to work with Quake 2. The game dll source code and all the utilities (including the OpenGL map editor) will be released shortly after the game hits store shelves.


Airport as Drag Strip

Filed under: — johnc @ 6:16 pm

Somehow we managed to convince the Mesquite city services and police department to let us take our cars down to the municipal airport and run them down the runway to get radar speed numbers. Is that cool, or what?

So, how fast can we go on a 6000 ft runway and still stop before running off the end?

John Cash’s M3 just barely hit the 135 mph speed governor.
Bear’s turbo Supra hit 144.
My F40 hit 165.
My TR’s left turbo exploded at 160 mph :(

Adrian, Todd, and Paul couldn’t make it, so we didn’t get Viper, Vette, or Porsche numbers.

It took less than 2000 ft for the TR to do 160. We were fully expecting to do 200 mph in 4000 ft if things had held together.

We have a bunch of video and sound footage that we are going to digitize eventually. We made one run with a police mustang chasing after my F40. Guess who won.

The F40 is a very, very durable car. I made six runs around 160 mph, and it didn’t even fade. Same thing on a racetrack. Lap after lap without any changes. My TR makes 1100 hp for twenty seconds, then explodes…


Last Two Months of Work Log

Filed under: — johnc @ 5:02 pm

Here is the last two months of my work log.

A * entry was completed on that day.
A + entry was completed on a later day.
A - entry was decided against on a later day.


—– at siggraph —–

aug 5

* fix qe4 autosave
* merged qlumpy into qdata, save separate files
* changed quaked to use texture directories

+ fix leaktest option
+ show texture directory on inspector window
+ show full texture name somewhere when clicked on
+ texture info overrides

remap maps to share common textures?

aug 6
* qe4 texture directories
* fixed vid_restart
* hacked alpha colors for cards without src*dst
* fixed qdata vc compiler bug in arg parsing
* qe4 surface inspector

aug 7
+ add animation frames to bsp file texinfos
- make bmodel frames just add to texinfo?
- should msurface flags hold the texinfo flags?
+ make window content implicit if any surfaces are trans
+ nodetail bsp
+ select face option in qe4
+ use monsterclips!
+ gl fullbright textures are still 2x brightness

moveable alpha surfaces
merge find texture dialog into surface inspector
fix qdata unix directory stuff
get rid of mod->skins, use mod->images

aug 8
* added origin brush support to old bsp for raven

+ add edge planes for brush hulls
- rate is broken – inventory fix

aug 9
* combined bsp tools into a single vc project
* new texture animation solution
* make any com_error drop the loading plaque
* tools and quake2 work with new bsp format

+ combine project files of bsp tools
+ anything translucent is automatically a detail contents
- duplicate texinfo for animations?
+ store out contents from trace!
+ arbitrary visleafs mappings
+ scanmaps option for pak file building of textures
+ delta lightstyle controls from server
+ max moveleafs problem
+ make r_dowarp a server passed variable?
+ why is hunk_begin different in software?

don’t forget to set SURF_NOSUBDIV on warps and sky!
compress ff in visdata as well as 0?
trinity idea: model light haze around every emitter
trinity idea: always model volumetric lights by rendering back sides
do a wavy specular water novelty
allow arbitrary chained lightmaps on a surface?
game.dll controllable particles
player sounds when moving? (breathing / footsteps / hitting walls)
rename .bsp to .bs2 ?
high frame rate run turn chunkiness

aug 10
* trans33, trans66, flow flags in gl
* damped warp modulation in gl
* ref_soft running with new data

+ shots are exploding on the sky again
+ auto set window contents if translucent
+ don’t set qe4 texture unless notexture
+ try new console background
+ finish animation cycling

detail brushes could be extended to be destroyable
new texture specification by three points?
check -tmpin -tmpout in bsp utils
rename texinfo to surfinfo?
pitch change during jumping
minimized window notification when a new client joins?
should origin brushes be included in bsp file for completeness?
use nodraw flag
pitch change when ducking
qrad light bleeds

aug 11
* don’t set qe4 texture unless notexture
* don’t set qe4 texture on cancel unless changed
* grabbed new menu and console
* invert mouse off in default.cfg
* all software flags
* mist contents

+ imagelist command in software

trinity: save out projection outlines from editor for textures
add a 5th control axis (and 6th?) for spaceorb ducking
gl: don’t keep lightmap blocks around in main memory?
entities not visible (or only visible) to owners
look in direction other than motion for hmd
quake as root directory problem
dir command
software surface / edge allocation issues

aug 12
* qe4 project on command line
* qe4 rshcmd replacement
* qe4 select face
* qe4 avoid multiple autosaves
* qe4 region selected brushes
* bindlist command
* imagelist command in ref_soft

+ leaktest
+ load game.dll from gamedir

pendulum motion
no jump on lava floor?
16 bit wall textures

aug 13
* cls.fixedimage support
* no frame before cinematic fix
* menu during cinematic fix

+ ingame cinematic state
+ indemo cinematic state
- move fraglogfile into game dll
+ layout language beyond simple centerprint
+ killserver needs to kill demos as well
+ must kill cinematic after menu, or restart palette
+ disconnected can be either at a console or running the demo
+ intro cinematic needs to be part of the game

force nolerp lag?
put ip filtering in game dll
handle localmodels explicitly, rather than as *num
don’t send heartbeats if not running a network game?
move viewmodel for all accelerations, including jumping and landing
fade out centerprints
design quit screen to allow addons to get credits
be consistent with window title bars
mp3 audio
qe4: downsample option, nomipmap option

aug 14
* qe4 project dialog fix
* intermission spots and movement
* hud transfer framework

+ micro levels that just play cinematics?
+ BUTTON_ANY option

remove oldorigin
use static arrays for map elements in renderers?
unit level statistics

aug 15
* smart background clear
* worked around 100% viewsize floating point bug
* increased base surface cache size
* unified server commands and prog commands
* fixed same level reload bug in ref_soft

+ are lightmaps always being saved, even if all black?
+ is notify box used for anything?
+ toggleconsole when connected to a net game
+ server needs to be able to send staticpics
+ draw to front buffer without swapping option
- can game.dll register commands?
+ direct sound and keyboard restart so ref can destroy window
+ loading plaque on local system doesn’t wait for hud erase

frame flicker option for evaluating missed VBL?
way to add remote commands on client side by scripts?
check client entering during intermission
moveable transparent bmodels
use sneaking in shadows to let players get a good look at more monsters
translate cinematic to greyscale instead of blanking?
remove zombietime?
are userinfo updates getting sent?

aug 16
download static screen images?
+ how to change semi-protected variables without killing server?
+ how do demo servers progress to the next server?
+ how does the client distinguish between a demo server?
parm to map command?
+ demo servers have special console behavior and don’t warn on game restart
+ do not allow remote connects to a demo server
+ no loading plaque if fullcon
+ cinematic trailing pic for victory
+ demo view angles?
+ text before next level after completed
+ replace draw_beginrawscene with setpalette?
+ keys should go to game when running cinematic, not console
+ does the console key serve as a skip-demo key on real servers?
+ need to flag unit ends for stats, vs simple transfers
+ pause demos and cinematics while menu is up

visible cue on players when typing and when lagged?
make sure there is never a server running if client is fullcon
must force full graphics reload on game change
don’t require full precache of all weapons?
demo servers won’t work for remote connections, because packets can be dropped
prevent map / demomap changes without killserver
map demo1.dem during game messes up
victory freeze

aug 17
* demo angles
* fixed initial lightmap cache value
* disconnect now does an ERR_DROP to kill server as well
* button_any support
* bad fov problem

never nextserver on finale
blaster autorepeat problem
end cinematic loading flicker

aug 18
* lightmap building errors

+ qe4: build in detail mode
+ animating textures
+ no different quantities on items?
+ target_secretcounter

the inherent problems of simplicity by complexity

aug 19
* leaktest

+ min clamp extents

———— Kansas City —————

aug 23
* cluster code

- boxcontents?
+ dump rgb lightmaps for software?
+ alias model aspect ratios different in software and gl?

share data between cmodel and ref
triangulate mightsee on vis?
malloc all cmodel arrays?
DONT_PRECACHE flag for player weapons?
make an ERR_DISCONNECT that doesn’t print ERROR: ?
don’t load entire map twice in cmodel and ref!
show clusterviscount for bsp time optimizations?
server/client communication for skin overrides

aug 24
* qe4 slow startup
* qrad

+ detail clip brushes?
+ extra brush clip planes

change qdata colormap to not use 0 and 255 for win palette

aug 25
* fixed water bsp bug! yeah!
* new tools in production
* view pitching with running
* weapon turn lagging
* debug graph

+ screen sizedown is not clearing again
+ animating textures
+ weapon change sounds should be server side
+ QE4: surface inspector apply is slow

qe4: separate “make current texture” from “make and set all”
currentmodel name problem in gl_model
userinfo changes

aug 26
* debuggraph on top
* better bobtime / bobcycle
* face separation overrun bug
* fast surface dialog
* show detail on camera title

+ link commands for playing from the cd

qe4: view menu checkboxes are wrong

aug 27
* fixed off-by-one cluster count
* fixed surf/content bit mismatch
* gun bob cycles
* falling pitch change

- make a fat pvs for rendering?
+ trace needs to return contents
+ rendering beams
+ delta lightstyle controls from server
+ finish animation cycling

QE4: deleting brushes doesn’t count as modified?
initial time on spawn Tent
underwater caustics
make all bobbing effects cvars
title on inspector is broken for textures
moveable alpha surfaces
don’t forget to set SURF_NOSUBDIV on warps and sky!
freeze map just sets a HUD of the victory screen
server scoreboard

aug 28
* fixed entity culling on gl
* sorted axial and edge bevels on all brushes

+ entity culling in GL
+ imagelist should have the downsampled sizes
+ software should dump rgb lightmap data

an origin brush will never change a texinfo?
NO! the offsets can change
are brush numbers messed up because of removed brushes?
plat push into floor
use textureisresident in imagelist?
load mip levels separately
duplicate planes
make set detail not work on entities
trinity: pivot feet! general atmospherics!
ray trace: texture+s/t for each sample, hardware reconstructs
walk up stairs by slope hitches up
animating textures
QE4: use gentextures
QE4: flush all textures option

aug 29

aug 30
* changed snapnormal
* fixed BUTTON_ANY
* unix makefile
* pic server
* runcinematic call
* console over cinematic fix

+ console key during game cinematics
+ version number for quake 2?
+ cinematic set palette needs to clear screen

use cluster level leafs for sound radiosity
jittered texel centers?
trinity: continuous textures, surface cache on all
make net, pause, and drawcenterstring HUDs

aug 31
* fixed trigger_always
* game dll by search path
* cinematic NULL bug
* help computer

+ get rid of datablocks?
+ dll init must clear the persistent data
- savegame needs to save game.dll name
+ save directory?
+ put pcx_t into qfiles.h?
+ unify all hud work into g_hud.c

set command with flags?
should “path” be renamed to pathlist?
trigger_always should be fixed size
somehow don’t resend big deltas (scoreboard hud) until ack?
client side feedback

sept 1
* QE4: bug with texture attributes on non-visible surfaces
* stack bug for initial light maps

+ pink lightmaps?
+ alt-tab should only minimize if full screen
+ version as command and var?

splashing sound when swimming at surface?
brains make view roll around
wasted polygons outside maps
vis decompression overruns?
make * model names visible
GL_MipMap overwriting?
trinity: proper biped walking
increase range of mouse slider
gun shock based on damage source

sept 2
* serveronly semaphore close check
* error during initialization messagebox check
* software rendering default wall image
* unify slidestepmove

+ progress bar
+ roll angles are getting set on rotating models
+ blinking lights on bonus items
+ alpha test fix

bug with loading a new map after an errored out map
qe4: turn region off for new map

gun should show vertical acceleration on lifts
view angle turning based on impacts
muzzle flashes
trinity: fully compressed textures need to compress the mip
levels as well, instead of generating them from the most
detailed form.
trinity: different packing options to layout all the texture
blocks. Square, thin, individual, etc.
trinity: investigate performance of background disk paging and
clustering of texture blocks into disk allocation units
trinity: texture connectivity graph for prepaging?
trinity: speculatively upload things that might be seen next
frame to balance uploading?
max upload, use lower mip levels if needed?

get rid of all the gl lightmaps in main memory!
allow jump up off bottom of water floor to give more velocity
merge net_udp and net_wins

slippery slopes
learned something: upload mip levels backwards

sept 3
* fixed scalloc size 0
* box on planeside fix
* remove 0 and 255 colormap references
* don’t allocate texinfos for empty texture names
* fix the initial teleport spawn timing bug
* sinking into plats
* exaggerate stepping when crouched?

+ allow pics off screen (status bar off bottom of screen)
+ check control configuration
+ make moveup jump
+ animating textures
+ base window not very noticeable
+ any flowing?
+ fix deltaangle hack in putclientinserver
+ move copytooldorg to prepworldframe
+ demos
+ r_speeds include particles
+ escape should pause demos

reduce acceleration on low grav levels
moving translucent objects
timedemo leaves console in attract key mode

underwater speed is too great
flex legs intentionally on plats?
forward when facing an obstruction directly should not slide
clip brush fragments in base
exit button clip stuck problem
no such oldframe -1
melee attacks out of range
rename g_client to p_client
merge cl_fx and cl_tent
deal with oldorgs better
software menu black screen flickers
rethink scrap allocation
crouchwalk up slopes is fucked up (stair uping)
r_stats include bind counts
change lightmaps into images
make gl_bind() take an image, so it can reference sizes
script parsing should take /* */ and line continuation
reduce skies?
3dfx opengl: detect thrashing and split the cache?

demos don’t read from pak files??????

muzzle flashes
forward / backwards airlocks
better button representations
brighter primary colors

sept 4
* mcd alphatest workaround
* gl_finish
* gl_dynamic
* fixed crash without basedir

+ savegame harness
+ loading plaque
+ proper alias bbox in gl
+ light feedback for server
+ 3dfx vid_mode problem
+ remove MAX_SCRAPS
- arbitrary skin support needed for power armor

sort entities by texture
segment skies up more to get better caching?
circle monsters pvs
are gl sprites double bright?

sept 5
* fixed all angle clamping issues
* allow look up / down to exactly 90 degrees

red numbers on status bar

sept 7
* timedemo attract flag bug
* multiple model entities
* 8/16 bit updates
* merged delta with baseline

+ destroy windows on each ref start
+ server time clamping issues
+ client light level different between refs
+ remove version command
+ animating textures
+ run from cd option
+ toast mergedemos

flags not used in entity-state?
airborne frames for everything?
don’t send player entity to owner in most cases
client quit dropping prints extra messages?
trinity: track and field style extra fast running?

sept 8
* fixed time clamping issue
* removed win32isms from snd_mix.c and snd_dma.c
* optimized dsound locks

+ scale texture stats by texture size
+ not autodetecting PII for mono lightmaps?
+ put swaps from ref into qshared?
+ walk backwards when looking straight up?

separate #define for asm code?
swim up with jump key
window close box
reload textures only on context recreation
do a stereo gl implementation on intergraph?
item using….

sept 9
* freed textures on gl shutdown
* fixed pitch bounds check merge bug
* cleanup sound code
* cut default maxclients
* cut update_backup

+ soft fullscreen failure on ingr
+ error not closing window
+ cds window set on top of taskbar
+ fullscreen in mode 0, set mode 1 = crash
+ alt stuck down after alt-tab

fix vis expansion problem
trinity: shimmering heat atmospheric effects
need a remove command builtin for game logic
get all texture extension numbers into gl_image
vis decompression overrun
window doesn’t offset in non-fullscreen modes

sept 10
* don’t precache player model in single player games
* dynamically change maxplayers

+ move null files into a separate directory?
+ alt sticking
+ maxmoveleafs error

teleport in flash is still wrong on second level
stuck on wall with low grav jumping
allow minimize?
win95 memory paging
still have some tjunctions

sept 11
* cddir

+ must save status upon entering a level if it was a new spawn
- each map has a unit number and a level number

is changing skill/etc going to be a problem while demos are running?
demos in pak file
don’t use virtual alloc!

sept 12
+ tag_game, tag_level
+ new game must clear
- need to save game state at last level entered as well as exact current
+ save level on exit…
+ spawnpoint to game dll wrong?
+ collapse sv_phys into something else?
+ skip all pixelformat stuff on minidrivers?

get rid of Com_SetServerState() ?
status command that prints out server and client state?
don’t allow anything but CRC checked pak file access in a demo
client userinfo change updates
worry about cvar info string lengths
make sure IP is visible to game
track deaths as well as frags?
view shaking from explosions?

sept 13
+ skill levels need to be archived with server state!
+ angle clamp on server is broke again
+ don’t shrink status bar with window
+ make sure all char * in edicts are copies, not just references
+ difference between reentering a level and reloading it

check all savegame files for disk space errors.
current is automatically updated whenever a level is exited
archive the level being exited if not leaving the unit
save the map to be entered, SKIPPING ANY CINEMATICS!
end of game will not have a final map, so don’t save
savegame does NOT update current, the level archive and server
is written directly to the new directory

new game

single player game
on death + press, bring up loadgame screen
on death + press, respawn
on death + press, respawn

need to have the game start up without TCP/IP unless asked for

dir command with sys_find*
ping equalization?
set userinfo->ip on each server connect
high quality / low quality sound option in menu, create a special sample for testing
fix svs / sv to be more game/level oriented
make coop games always four player?
wav lag seems worse

sept 14
* qdata grab alias numeric suffix
* menu architecture

+ make a portal entity
+ connect doors to portal entities
+ treat portal contents like windows
+ flood fill leafs, but stop at portals to count areas
+ if actual leaf with portal contents, should choose any area next to it
+ each portal brush should have exactly two areas bordering it
+ server sends over a bit vector of areas visible to player
+ use area visibility as fast reject for line testing?

+ should portal entities remain separate, or just add a portal field to doors?
+ builtin: SetPortalState (int pnum, qboolean open);
+ portals MUST go in the structural bsp!
+ each leaf has an area field
+ each portal has two areas it connects
+ all other data can be derived
+ areas have the list of portals
+ area * area * 2^portals == too large!
+ must do dynamic flood fill
+ most portals will be closed, so flood fill is fast

+ game pause
+ no status bar after death

allow higher precision bmodel angle specification?
put v_idle back in?
super crouch jump?
establish a client connection at startup to avoid localconnects?
more bright areas in the game for contrast?
throbbing health status pic
weapon cycle command
bigger font?
always have visible blood particles by face when hit?
bounce health status around when hit?
radius of alias model dlighting is greater than surface dlighting

trinity: use mouse cursors for ui stuff?

menu_move, menu_down, menu_up, menu_change, menu_slide

trinity: software trilinear with second pass?
only works if vertex lighting

sept 15
+ cinematic paking!
+ r_dspeeds should include translucent time
+ alt key stuck down after alt-enter
+ bonus flashes

texture releasing from maps isn’t uniqued
scissor triangles
faster z clip
make autoexec.cfg work differently because of demos

sept 16
* finished box sweeping code
* fix the automenu key problem on bad cinematic
* blinking black screen palette set issues
* send dowarp over from server
* fixed color 0 grabbing
* bonus flashes
* Q_fabs

+ dedicated server

drop stair climb in air height

sept 17
* fixed qe4 texturepath bug
* qe4: show in use textures even when showing a wad
* utils: fixed gamedir to allow nested quake2 directories
* moved env out of gfx, so gfx is all source files

malloc tags for game and level
clone detail brushes should remove detail flag
make timedemo a server connect thing
ktest.reg bad model

sept 18
* finished code dump
* dedicated server
* removed all dash parms
* texture paging research

+ examine ambient sounds
+ key clear events doesn’t clear everything

sound streaming
bsp hint brushes (SURF_NODRAW?)
ip cvar for multihomed servers
ip userinfo for clients
report dash parms on cmd line?
menu on top of cinematic leaves crap
color 0 is still broken on NT

allow clients to connect to the server even if it is not
running a level?
local client is always connected
clients are only kicked when the entire server is shut down
or they connect to a remote server

sept 19
* basedir / cddir exec problem
* moved edict allocation into game

+ only change yaw on riding bmodels
+ city3 software crash
+ odd pause before connecting to map
+ !!!SV_PointContents needs to check entities!!!
+ areaportal numbers
+ move spawn/free into game logic?

are sound starts lagged by 0.1? only lag offsets?
get all cvars for game into one place
send objects only to owner or vice versa flag
loading plaque from post cinematic “you win!” screen
QE4: fix the idle redraw problem
vis decompress overrun
get rid of zombie state?

sept 20
dead air conditioning

sept 21
* areaportals!!!
* model contents for moving water

+ use different decompression buffers for pvs and phs?
+ fix headnode issues

get rid of fat pvs completely?

sept 22
* fullsend headnode done properly

+ animating textures
+ must check all cluster areas for doors

-1 cluster issues?
more dramatic railgun spray
check all the trace flags to see if they are still needed

sept 23
* fixed bmodel cluster overload
* double areas for doors
* drawworld 0
* bmodel sound origins

+ rename “globals” in game to ge?
+ remove limits on max entities in packet?
+ better way of tracking static sounds, so they can be turned off?
+ object looping sounds?
+ machine sounds in fact2
+ fix look up / run back bug
+ add more packet buffers to avoid “dumped packet” warnings?
+ dll basing information for function pointers

finish status bar and inventory
use areas for multicast
hint brushes
eliminate baselines by always tracking client’s knowledge of all ents?
qdata model frame auto-number issues
snap stuck view when dead?
set_ex command to set info status
game dll version number?

sept 24
* fixed area bug for headnoded entities
* fixed noclip outside world view with areas
* fixed gl_linear getting stuck

+ cull sound spatialization by area?
+ don’t save level if going to a new map instead of gamemap
+ put .pak format into qfiles
- make chaingun do less damage per bullet than the machinegun?
+ check entity sound overriding in fact2
+ software glitches with areaportals
+ move spawn and free into game
+ weird blaster trails
+ make sure doors open / close areas properly relative to sound starts

guarantee string fields are never NULL?
client userinfo updates
software skybox rotation
make -ip work as a cvar
ip as userinfo
areaconnected game primitive

server engine manages
connection establishment
collision detection
console interface
map/game transitions

cache pvs/phs expansion
falling damage
manual mipmapping on skins?

sept 25
+ should loadgame always unload and re-init game dll?
+ load level with spawnpoint is different than load game
+ good sound control instead of static sounds
+ move use / drop into game code
+ texpaint autosave
+ noreadlevel cvar?

localconnect sometimes needs to be reissued
non-axial triggers
rename and g_client to p_*?
don’t nextlevel from “you win”
userinfo visible to progs
too many edicts in an area for sv_world?
spawn flashes are still wrong

——— in seattle ———–

sept 28
+ pain sounds?

window crunching on win95, due to order of DX operation?

sept 29
* texpaint: backup files
* texpaint: size dialog on new skin
* texpaint: auto save
* normalized translucency level in gl
* fixed func_group entity miscount
* fixed target_changelevel use clearing
* fixed pointcontents with moving entities
* fixed spawnpoint storage
* use areas in multicast
* removed ambient sound calls

+ >512 entities
+ view kick even without move kick
+ blaster spawn point
- include sky, skyaxis as player_state variables?
+ transfer player health on each spawn and level exit

set spawntemp strings to “” instead of null?
alias save sequence number issues in qdata and texpaint
client userinfo
different blaster flash against flesh
rename edict_t to entity_t
init without IP
never same pain animation twice in a row?
map during loading bug
remove old_origin
makeuserinfo / makeserverinfo?
auto-loop entitystate sounds?
!!! must include full path of predicted positions in fatpvs !!!
muzzle flash effect in entity_state?
jail4a iris door problem
MOTD console variable
QE4: scrub out partial detail brushes

sept 30
* better host_speeds
* fixed bsp slowdown bug
* made Draw_Pic: bad coords not an error
* avoided double game init on loadgame
* moved serverflags into game
* fixed pause on initial connect from dumped loopback
* inventory rate bug
* client view rolling bug on level change
* texture animation

+ more barrel explosion damage
+ archive all sensitive server cvars
+ fix inventory system
+ target_goalcounter
+ get rid of packet_entities_t as a structure
+ cinematic message name bug
+ up as jump?
+ more light!
+ weapons need to be more obvious in the world. light pulsing?
+ make first backtrack into an area with known landmarks

fade center printed text
still have color 0 problems under NT
allow game to select a client slot for connections?
combine g_player with something else
save view angles in savegame somehow
more function pointer checks in loadgame
watch out for different maxclients on loadgames
increase alias model shading level?
use userinfo_sequence
HUD strings need to be tag_game, not level
make ping time available to game?
different faces on status bar for male/female characters?
save health across levels
stuck in water currents in base2?
clamp max fly velocity
fish in air on base3
guys shooting through force field sometimes in bunk
rename gl_mesh to gl_alias? or r_alias to r_mesh?
jump / crouch key placement
auto view centering
more edge on mipmapping in software?

oct 01
* got rid of precache_
* got rid of SV_Error
* !!! config strings !!!

+ are lightstyle strings being dynamically freed properly?
+ pause
+ remove SV_Error?

!!!move timedemo to server
should setmodel take an index?
smart precache of weapons?
long crawls are annoying
skin reference counting
does leak test work?
bad surface extents levels
make sound and image names include extensions?
!!! how to download implicit images ??? !!!
!!! demo recording with deltas needs to wait for full update !!!
make timeout at least a minute?
multicast_all_r for configstring should go to connected as well as active clients
string encode SKY_AXIS and SKY_ROTATE in SKY?
it will be possible to get an index for an item not yet known
because of reliable / unreliable issues
block_until_reliable option?
suppress flag on HUDs to allow cheap blinking?
rename “map” to “start"?
extra packet dumps still happening on map start
remove CL_MapEntity
move baselines into a parallel array?
don’t expose svc_tent / muzzle flash numbers to game?
dropcommand cvar to restart crashed servers?
better box top walk jumping
full death cycle for player
inventory is persistent, per-client state.
no high step jump out
pain and death animations should be based on impact direction and
total damage in that frame
check on virtual alloc / commit issues
weird bmodel edge stream problem
increase numstacksurfs / numstackedges
clear sound buffer on loading plaque

oct 02
* don’t lerp blends
* sum damages for end of frame
* damage kicks scaled by health
* don’t run more than one frame at a time
* fixed alias model brightness in software
* equalized light feedback value between refs

different console background
infantry melee attack?
still get stuck sometimes
mono lighting should not color alias models
put away restart game menu
rename qmenu and menu.c to something common
numeric keypad controls?
monster hearing not right?
walk into player = always attack
are infantry fullbright flashes not working?
level to level health
lighting feedback still different in software and gl
player pain sounds
animate translucent textures
better monster sight / hearing
make sure switches are animating
echo center prints to console
PHS or PVS activated guards?
rotate with textures option in QE4 for crates
do mynoise entities leak on level transitions?
crouch strafe is still full speed

oct 3
* game pause
* pain sounds
* save health between levels
* moved baselines to a parallel array
* software screenshot directory
* 1.4k packets only!
* map command while paused?
* pos sound overriding
* sound area testing
* separate pvs / phs static arrays
* cleared sound buffer when disabled for loading
* PHS calculation bug

sound improvements since q1
respatializing on moving entities
sub frame start commands
looped sounds are deterministic
sounds are removed by area and PHS
looped sounds sum

muzzle flashes!
bullet impact puffs?
why can’t you fire a single machinegun bullet?
avoid loading the map file twice for server and refresh
option for multicast to PVS for effects instead of PHS?
remove sbar2 sounds
centralize all communication between client and server sides
warnings for improperly looped sounds
multiple speaker entities with the same looped sound
will just increase the range
target_speaker checkbox for player locals (voiceovers)
base1 - base2 - base1 - base2 -death goes to base1
paused level to level bugs
yaw towards killer
monsters not going to ideal yaw when shooting?
blood jet muzzle flashes
inventory update on level change
keepalive messages while precaching
no red flash on deaths?
initial and final trail parts
server quit doesn’t get the disconnect message out
option to have dlights backface cull
extended sound bytes
release mouse when paused?
never let server be connected without local client
!!! possound needs to also take an entity number !!!
slow water wading

oct 4
* map_noareas
* target_speaker

handle bmodel origins on client side, search for good area
check localsound pos starts
make the server read the demo configstrings and baselines
and spit back to the client like normal
water wading sounds
sound streaming option
remove cl_mapentity
sync camera pain motion to sound length?
give all sounds a max volume area instead of instant diminish
quakeworld style shotgun handling

oct 5
* !!! autolooped entity sounds !!!

make all tools into 5.0 projects
combine SZ_ and MSG_
always mkdir gamedir?
pause dumps packets?
clear all background all the time flag
player physics
MD4 each map file?
print version number on console bottom
select a different cd track if all goals accomplished
get rid of alphalight

oct 6
* larger bsp token length

goal sound, secret sound, help sound
remove pushmatrix/popmatrix
less shademodels
sound mixaheads
flies should be a sound field
trinity: two pass texture checking to avoid thrashing?
lose links and $ macro expansion?
lose Com_SetServerState
blinking flags on huds (blink F1)
!!! rotating object view changes not in yaw !!!
save configstrings in level to get lightstyles
better armor feedback
armor sound?

oct 7
* pitch snap clamping
* clamp at 89
* kill sounds when loading plaque
* no fov or null pic problem during prep refresh
* wounded faces

die then bring down console over menu bug
windows key
make sprite files text format?
all explodables and breakables should be pre-broken in dm
flash stats on change?
send email to j sturges
inventory names
super tank skin
variable delay on centerprint
demo tests
flies as entity sound
release mouse when paused?
peak to peak view bobbing
counter items
infantry skins

menu sounds
secret sound
goal sound

sound when low on health?
respawn muzzle flash event still wrong
falling damage
rotating sky in software
color 0 on NT
transparent water insides

oct 8
* fixed entity numbers 512-1024
* combined baselines and oldorgs
* demos working again
* MAX_SFX bug

spawn invisible always starts at 256, so visible get bytes?
check goal counting
any key puts away help?
muzzle flashes
make cl_entities dynamic?
removed mergedemo on client
remove all client demo playback
finish savegame / loadgame UI
finish cinematic sound
check demo fopen spawning for cddir

weird palette issues?
more red

minimum health
infantry muzzle sounds?
remove blaster hit flashes on flesh
no savegame when dead
mine2 disappearing problems
use key problems

qrad: infinite styles on face

always have two secrets
always select new items
do demos need a precache command?

!!!save lightstyles in savegame!!!
!!!save areaportal state in savegame!!!

are loadgames doing 10 second prerun?

move say and say_team into game
sentity_t gentity_t
more blood

areaportals in software - bad sort keys?

oct 9
* removed MAX_PACKET_ENTITIES limit
* used areas for beam culling
* centerprint to non client not error
* don’t rotate roll when pushed by entities
* areaportal fragments in software
* F_CLIENT fix
* KEY_ANY fix
* save areaportals
* save lightstyles
* fixed secret double counting
* up as jump

color 0 on NT
water wading speed
water jump out

no savegame when dead
more damage blend
putclient in server shouldn’t reference weaponmodel
userinfo issues
IP cvar for servers
IP userinfo for clients
remove sv.viewpos?
make max_entities a noset cvar
don’t use PHS?
up / down issues
broadcast centerprint
flickery lights
free mouse when paused


New Culling Mechanism for Quake 2

Filed under: — johnc @ 1:13 am

A significant new feature for map development sneaked into Quake 2 this week.

It has always been a problem with Quake that putting a door in front of a complex area didn’t make the scene run any faster, unlike DOOM. In glquake, it actually made it significantly slower as you approached the door, due to overdraw.

There was also the related problem that monsters heard sounds through doors even if they were closed.

This was because the primary culling mechanism used by Quake is the PVS – the Potentially Visible Set. It only knew about anything that you could POTENTIALLY see from your current (rough) position. If a door might open, the PVS would always contain everything that you could see even if the door was currently closed.
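The PVS described above is, at heart, just a per-cluster bit vector. A minimal sketch of that idea, with illustrative names (`MarkClusterVisible`, `ClusterVisible`) that are assumptions for this example rather than Quake’s actual code:

```c
/* Illustrative sketch: a PVS row is a bit vector with one bit per
   cluster that could potentially be seen. Names are hypothetical. */

/* Mark 'cluster' as potentially visible in a PVS bit vector. */
void MarkClusterVisible(unsigned char *pvs, int cluster)
{
    pvs[cluster >> 3] |= 1 << (cluster & 7);
}

/* Test whether 'cluster' is marked potentially visible. */
int ClusterVisible(const unsigned char *pvs, int cluster)
{
    return (pvs[cluster >> 3] >> (cluster & 7)) & 1;
}
```

The key limitation is exactly what the paragraph above says: the bits are precomputed for the potential case, so a closed door cannot clear them.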

Quake 2 now has a way to allow you to lop off large amounts of the map irrespective of the PVS.

A map is now divided into “areas” by “areaportal” entities, usually in door frames.

If the area behind a door is not reachable by any open areaportals, then nothing from that area will be visible or hearable. This helps both rendering speed and network bandwidth. It also gives the level designer an easy “band-aid” when they have designed an area that is too slow.

Note that the area-reachable test is strictly a topological flood fill, so if ANY route to the other side of a door is open, you will still be processing the area behind the door, even if there is no real way you could see through the available route.

If your level has a reasonable number of doors, it will often run at a fair speed without any PVS information at all.

To use this feature, you create a thin “func_areaportal” entity that hides completely inside the door, then target the door at it. Qbsp3 does a bunch of work behind your back that you really don’t want to know about. Doors have special logic in the game to open or close the areaportal at the appropriate time.

I chose not to make it an automatic feature of doors for a few reasons:

1) Teamed double or quad doors would not create a single portal across the entire doorway.

2) The areaportal entity can also be used for things like exploding walls. You can even put one just around a corner and trigger it with a field, but it is usually better to just let the PVS take care of corner bends.

3) Complex doors would have created complex (but invisible) area portal brushes, which would have messed up the bsp a bit.
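The open/closed flood fill described above can be sketched in a few lines of C. This is a toy model, not id's code – `area_connected`, `portal_between`, and `portal_open` are invented names – but it captures the rule that ANY open route keeps an area live:

```c
#include <assert.h>
#include <string.h>

#define MAX_AREAS 8

/* adjacency: portal id connecting two areas, -1 if no portal between them */
static int portal_between[MAX_AREAS][MAX_AREAS];
static int portal_open[MAX_AREAS * MAX_AREAS];

static void flood_area(int area, int visited[MAX_AREAS])
{
    int other;

    if (visited[area])
        return;
    visited[area] = 1;
    for (other = 0; other < MAX_AREAS; other++) {
        int p = portal_between[area][other];
        if (p >= 0 && portal_open[p])
            flood_area(other, visited);  /* topological flood: any open route counts */
    }
}

/* returns 1 if 'target' should still be rendered / heard from 'viewer' */
int area_connected(int viewer, int target)
{
    int visited[MAX_AREAS];

    memset(visited, 0, sizeof(visited));
    flood_area(viewer, visited);
    return visited[target];
}
```

Doors would toggle `portal_open[]` in their open/close logic, which is exactly the special-case hook mentioned above.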

I think this was the very last data file change for Quake 2, so here is the current external files header for the curious: (4 character tabs)

// qfiles.h: quake file formats
// This file must be identical in the quake and utils directories


.MD2 triangle model file format


#define IDALIASHEADER (('2'<<24)+('P'<<16)+('D'<<8)+'I')
// little-endian "IDP2"

#define MAX_TRIANGLES 4096
#define MAX_VERTS 2048
#define MAX_FRAMES 512
#define MAX_MD2SKINS 32
#define MAX_SKINNAME 64

typedef struct {
short s;
short t;
} dstvert_t;

typedef struct {
short index_xyz[3];
short index_st[3];
} dtriangle_t;

typedef struct {
byte v[3]; // scaled byte to fit in frame mins/maxs
byte lightnormalindex;
} dtrivertx_t;

#define DTRIVERTX_V0 0
#define DTRIVERTX_V1 1
#define DTRIVERTX_V2 2

typedef struct {
float scale[3]; // multiply byte verts by this
float translate[3]; // then add this
char name[16]; // frame name from grabbing
dtrivertx_t verts[1]; // variable sized
} daliasframe_t;
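A `daliasframe_t` vertex is reconstructed by inverting the quantization the comments describe: multiply the byte coordinates by `scale`, then add `translate` (at grab time, `scale` would be (maxs-mins)/255 per axis). A minimal sketch – `decode_vertex` is my name, not the engine's:

```c
#include <assert.h>

typedef unsigned char byte;

typedef struct {
    byte v[3];              /* scaled byte to fit in frame mins/maxs */
    byte lightnormalindex;  /* index into a shared table of unit normals */
} dtrivertx_t;

/* out[i] = v[i] * scale[i] + translate[i], undoing the 8-bit-per-axis
   quantization across the frame's bounding box */
void decode_vertex(const dtrivertx_t *in, const float scale[3],
                   const float translate[3], float out[3])
{
    int i;
    for (i = 0; i < 3; i++)
        out[i] = in->v[i] * scale[i] + translate[i];
}
```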

// the glcmd format:
// a positive integer starts a tristrip command, followed by that many
// vertex structures.
// a negative integer starts a trifan command, followed by -x vertexes
// a zero indicates the end of the command list.
// a vertex consists of a floating point s, a floating point t,
// and an integer vertex index.

typedef struct {
int ident;
int version;

int skinwidth;
int skinheight;
int framesize; // byte size of each frame

int num_skins;
int num_xyz;
int num_st; // greater than num_xyz for seams
int num_tris;
int num_glcmds; // dwords in strip/fan command list
int num_frames;

int ofs_skins; // each skin is a MAX_SKINNAME string
int ofs_st; // byte offset from start for stverts
int ofs_tris; // offset for dtriangles
int ofs_frames; // offset for first frame
int ofs_glcmds;
int ofs_end; // end of file

} dmdl_t;
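A sketch of consuming the glcmd list exactly as the comment block above specifies: a positive count opens a tristrip, a negative count a trifan, zero terminates, and each vertex occupies three dwords (float s, float t, int index). `walk_glcmds` and `glcmd_t` are hypothetical names, and a real renderer would issue strip/fan primitives instead of just counting vertexes:

```c
#include <assert.h>

typedef union {
    int   i;
    float f;
} glcmd_t;

/* returns the total number of vertexes referenced by the command list */
int walk_glcmds(const glcmd_t *cmds)
{
    int total = 0;

    for (;;) {
        int count = (cmds++)->i;
        if (count == 0)
            break;              /* zero indicates end of the command list */
        if (count < 0)
            count = -count;     /* negative: trifan instead of tristrip */
        total += count;
        cmds += count * 3;      /* skip s, t, index for each vertex */
    }
    return total;
}
```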


.SP2 sprite file format


#define IDSPRITEHEADER (('2'<<24)+('S'<<16)+('D'<<8)+'I')
// little-endian "IDS2"

typedef struct {
int width, height;
int origin_x, origin_y; // raster coordinates inside pic
char name[MAX_SKINNAME]; // name of pcx file
} dsprframe_t;

typedef struct {
int ident;
int version;
int numframes;
dsprframe_t frames[1]; // variable sized
} dsprite_t;


.WAL texture file format


#define MIPLEVELS 4
typedef struct miptex_s {
char name[32];
unsigned width, height;
unsigned offsets[MIPLEVELS]; // four mip maps stored
char animname[32]; // next frame in animation chain
int flags;
int contents;
int value;
} miptex_t;


.BSP file format


#define IDBSPHEADER (('P'<<24)+('S'<<16)+('B'<<8)+'I')
// little-endian "IBSP"

#define BSPVERSION 38

// upper design bounds
// leaffaces, leafbrushes, planes, and verts are still bounded by
// 16 bit short limits
#define MAX_MAP_MODELS 1024
#define MAX_MAP_BRUSHES 8192
#define MAX_MAP_ENTITIES 2048
#define MAX_MAP_ENTSTRING 0x20000
#define MAX_MAP_TEXINFO 8192

#define MAX_MAP_AREAS 256
#define MAX_MAP_PLANES 65536
#define MAX_MAP_NODES 65536
#define MAX_MAP_BRUSHSIDES 65536
#define MAX_MAP_LEAFS 65536
#define MAX_MAP_VERTS 65536
#define MAX_MAP_FACES 65536
#define MAX_MAP_LEAFFACES 65536
#define MAX_MAP_PORTALS 65536
#define MAX_MAP_EDGES 128000
#define MAX_MAP_SURFEDGES 256000
#define MAX_MAP_LIGHTING 0x200000
#define MAX_MAP_VISIBILITY 0x100000

// key / value pair sizes

#define MAX_KEY 32
#define MAX_VALUE 1024


typedef struct {
int fileofs, filelen;
} lump_t;

#define LUMP_PLANES 1
#define LUMP_NODES 4
#define LUMP_TEXINFO 5
#define LUMP_FACES 6
#define LUMP_LEAFS 8
#define LUMP_EDGES 11
#define LUMP_MODELS 13
#define LUMP_BRUSHES 14
#define LUMP_POP 16
#define LUMP_AREAS 17
#define HEADER_LUMPS 19

typedef struct {
int ident;
int version;
lump_t lumps[HEADER_LUMPS];
} dheader_t;

typedef struct {
float mins[3], maxs[3];
float origin[3]; // for sounds or lights
int headnode;
int firstface, numfaces; // submodels just draw faces
// without walking the bsp tree
} dmodel_t;

typedef struct {
float point[3];
} dvertex_t;

// 0-2 are axial planes
#define PLANE_X 0
#define PLANE_Y 1
#define PLANE_Z 2

// 3-5 are non-axial planes snapped to the nearest
#define PLANE_ANYX 3
#define PLANE_ANYY 4
#define PLANE_ANYZ 5

// planes (x&~1) and (x&~1)+1 are always opposites

typedef struct {
float normal[3];
float dist;
int type; // PLANE_X - PLANE_ANYZ ?remove? trivial to regenerate
} dplane_t;

// contents flags are separate bits
// a given brush can contribute multiple content bits
// multiple brushes can be in a single leaf

// lower bits are stronger, and will eat weaker brushes completely
#define CONTENTS_SOLID 1 // an eye is never valid in a solid
#define CONTENTS_WINDOW 2 // translucent, but not watery
#define CONTENTS_AUX 4
#define CONTENTS_MIST 64

// remaining contents are non-visible, and don’t eat brushes



// currents can be added to any other contents, and may be mixed
#define CONTENTS_CURRENT_0 0x40000
#define CONTENTS_CURRENT_90 0x80000
#define CONTENTS_CURRENT_180 0x100000
#define CONTENTS_CURRENT_270 0x200000
#define CONTENTS_CURRENT_UP 0x400000
#define CONTENTS_CURRENT_DOWN 0x800000

#define CONTENTS_ORIGIN 0x1000000 // removed before bsping an entity

#define CONTENTS_MONSTER 0x2000000 // should never be on a brush, only in game
#define CONTENTS_DEADMONSTER 0x4000000
#define CONTENTS_DETAIL 0x8000000 // brushes to be added after vis leafs
#define CONTENTS_TRANSLUCENT 0x10000000 // auto set if any surface has trans
#define CONTENTS_LADDER 0x20000000

typedef struct {
int planenum;
int children[2]; // negative numbers are -(leafs+1), not nodes
short mins[3]; // for frustum culling
short maxs[3];
unsigned short firstface;
unsigned short numfaces; // counting both sides
} dnode_t;

typedef struct texinfo_s {
float vecs[2][4]; // [s/t][xyz offset]
int flags; // miptex flags + overrides
int value; // light emission, etc
char texture[32]; // texture name (textures/*.wal)
int nexttexinfo; // for animations, -1 = end of chain
} texinfo_t;

#define SURF_LIGHT 0x1 // value will hold the light strength

#define SURF_SLICK 0x2 // effects game physics

#define SURF_SKY 0x4 // don’t draw, but add to skybox
#define SURF_WARP 0x8 // turbulent water warp
#define SURF_TRANS33 0x10
#define SURF_TRANS66 0x20
#define SURF_FLOWING 0x40 // scroll towards angle
#define SURF_NODRAW 0x80 // don’t bother referencing the texture

// note that edge 0 is never used, because negative edge nums are used for
// counterclockwise use of the edge in a face
typedef struct {
unsigned short v[2]; // vertex numbers
} dedge_t;

typedef struct {
unsigned short planenum;
short side;

int firstedge; // we must support > 64k edges
short numedges;
short texinfo;

// lighting info
byte styles[MAXLIGHTMAPS];
int lightofs; // start of [numstyles*surfsize] samples
} dface_t;

typedef struct {
int contents; // OR of all brushes (not needed?)

short cluster;
short area;

short mins[3]; // for frustum culling
short maxs[3];

unsigned short firstleafface;
unsigned short numleaffaces;

unsigned short firstleafbrush;
unsigned short numleafbrushes;
} dleaf_t;

typedef struct {
unsigned short planenum; // facing out of the leaf
short texinfo;
} dbrushside_t;

typedef struct {
int firstside;
int numsides;
int contents;
} dbrush_t;

#define ANGLE_UP -1
#define ANGLE_DOWN -2

// the visibility lump consists of a header with a count, then
// byte offsets for the PVS and PHS of each cluster, then the raw
// compressed bit vectors
#define DVIS_PVS 0
#define DVIS_PHS 1
typedef struct {
int numclusters;
int bitofs[8][2]; // bitofs[numclusters][2]
} dvis_t;
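The header only says “raw compressed bit vectors”; the scheme assumed here is Quake-style run-length encoding of zero bytes, where a 0 byte is followed by a count of zero bytes to emit. A sketch of expanding one cluster's row:

```c
#include <assert.h>

typedef unsigned char byte;

/* expand one compressed PVS/PHS row into row_bytes of plain bit vector */
void decompress_vis(const byte *in, byte *out, int row_bytes)
{
    byte *end = out + row_bytes;

    while (out < end) {
        if (*in) {
            *out++ = *in++;     /* literal byte of the bit vector */
        } else {
            int count = in[1];  /* 0 byte, then the zero-run length */
            in += 2;
            while (count-- && out < end)
                *out++ = 0;
        }
    }
}
```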

// each area has a list of portals that lead into other areas
// when portals are closed, other areas may not be visible or
// hearable even if the vis info says that it should be
typedef struct {
int portalnum;
int otherarea;
} dareaportal_t;

typedef struct {
int numareaportals;
int firstareaportal;
} darea_t;


No Attacks on Competition

Filed under: — johnc @ 11:39 am

I want to apologize for some of the posturing that has taken place in .plan files.

I have asked that attacks on our competition no longer appear in .plan files here. I don’t think it is proper or dignified.

If everyone clearly understood that an individual’s opinion is only that – the opinion of a single individual – I wouldn’t have bothered. Unfortunately, opinions tend to be spread over the entire group, and I am not comfortable with how this makes me perceived.

Building up animosity between developers is not a very worthwhile thing.

A little chest-beating doesn’t really hurt anything, but putting down other developers has negative consequences.

I think that we have a track record that we can be proud of here at id, but we are far from perfect, and I would prefer to cast no stones.

The user community often exerts a lot of pressure towards confrontation, though. People like to pick a “side”, and there are plenty of people interested in fighting over it. There are a lot of people that dislike id software for no reason other than they have chosen another “side”. I don’t want to encourage that.

Magazine articles are usually the trigger for someone getting upset here. It’s annoying to have something you are directly involved in misrepresented in some way for all the world to see. However, I have been misquoted enough by the press to make me assume that many inflammatory comments are taken out of context or otherwise massaged. It makes a good story, after all.

Sure, there ARE developers that really do think they are going to wipe us off the face of the earth with their next product, and don’t mind telling everyone all about it. It’s always possible. They can give it their best shot, and we’ll give it ours. If they do anything better, we’ll learn from it.


DOOM Source Code Update

Filed under: — johnc @ 11:28 am

I get asked about the DOOM source code every once in a while, so here is a full status update:

The Wolfenstein code wasn’t much of a service to release – it was 16 bit dos code, and there wasn’t much you could do with it. Hell, I don’t think it even compiled as released.

The DOOM code should be a lot more interesting. It is better written, 32 bit, and portable. There are several interesting projects that immediately present themselves for working with the code. GLDOOM and a packet server based internet DOOM spring to mind. Even a client/server based DOOM server wouldn’t be too hard to do.

I originally intended to just dump the code on the net quite some time ago, but Bernd Kreimeier offered to write a book to explain the way the game works. There have been a ton of issues holding it up, but that is still the plan. If things aren’t worked out by the end of the year, I will just release things in a raw form, though.

My best case situation would be to release code that cleanly builds for win32 and linux. Bernd is doing some cleanup on the code, and some of the Ritual guys may lend a hand.

One of the big issues is that we used someone else’s sound code in dos DOOM (ohmygod was that a big mistake!), so we can’t just release the full code directory. We will probably build something off of the quake sound code for the release.

I think I am going to be able to get away with just making all the code public domain. No license, no copyleft, nothing. If you appreciate it, try to get a pirate or two to buy some of our stuff legit…



Filed under: — johnc @ 11:27 am

I went to siggraph last monday to give a talk about realtime graphics for entertainment.

The only real reason I agreed to the talk (I have turned down all other offers in the past) was because Shigeru Miyamoto was supposed to be on the panel representing console software. Id Software was really conceived when Tom, Romero, and I made a Super Mario 3 clone after I figured out how to do smooth scrolling EGA games. We actually sent it to Nintendo to see if they wanted to publish a PC game, but the interest wasn’t there. We wound up doing the Commander Keen games for Apogee instead, and the rest is history.

I was looking forward to meeting Mr. Miyamoto, but he wound up canceling at the last minute. :(

Oh well. I hope everyone that went enjoyed my talk. All the other speakers had powerpoint presentations and detailed discussion plans, but I just rambled for an hour…

I noticed that there was a report about my discussion of model level of detail that was in error. I have an experimental harness, an algorithm, and a data structure for doing progressive mesh style LOD rendering in the quake engine, but I suspect it won’t make it into the production Quake 2. Other things are higher priority for us. I may assist some of the quake licensees if they want to pursue it later.

A couple data / feature changes going into the latest (and I hope final) revision of the Quake bsp file format:

Back in my update a month ago where I discussed losing automatic frame animation in models to clean up the format and logic, I mentioned that I still supported automatic texture animation.

Not anymore. There were several obnoxious internal details to dealing with it, especially now with textures outside the bsp file, so I changed the approach.

When a texture is grabbed, you can now specify another texture name as the next animation in a chain. Much better than the implicit-by-name specification from Quake 1.

No animation is automatic now. A bmodel’s frame number determines how far along the animation chain to go to find the frame. Textures without animation chains just stay in the original frame.

There is a slight cost in network traffic required to update frame numbers on otherwise unmoving objects, but due to the QuakeWorld style delta compression it is still less than a Quake 1 scene with no motion at all.

The benefit, aside from internal code cleanliness, is that a game can precisely control any sequence of animation on a surface. You could have cycles that go forward and backwards through a sequence, you could make slide projectors that only change on specific inputs, etc.
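The frame-number rule above amounts to walking the explicit chain. A sketch, assuming the on-disk nexttexinfo links have been resolved into pointers at load time (`mtexinfo_t` and `anim_next` are invented names, not the engine's):

```c
#include <assert.h>
#include <stddef.h>

typedef struct mtexinfo_s {
    const char *name;
    struct mtexinfo_s *anim_next;  /* next frame in chain, NULL = end */
} mtexinfo_t;

/* A bmodel's frame number determines how far along the chain to walk.
   Textures without an animation chain just stay in the original frame. */
const mtexinfo_t *texture_animation(const mtexinfo_t *tex, int frame)
{
    while (frame-- > 0 && tex->anim_next)
        tex = tex->anim_next;
    return tex;
}
```

A slide projector, for example, is just game code bumping the bmodel's frame number on a trigger, with no implicit timing anywhere.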

You could not independently animate two sides of a bmodel that were not synchronized with the same number of frames, but you could always split it into multiple models if you really needed to.

Everything is simple when it’s done, but I actually agonized over animation specification for HOURS yesterday…

The last significant thing that I am working on in the map format is leaf clustering for vis operations. You can specify some map brushes as “detail” brushes, and others as “structural” brushes. The BSP and portal list is built for just the structural brushes, then the detail brushes are filtered in later.

This saves a bit of space, but is primarily for allowing complex levels to vis in a reasonable amount of time. The vis operation is very sensitive to complexity in open areas, and usually has an exponentially bad falloff time. Most of the complexity is in the form of small brushes that never really occlude anything. A box room with ten torch holders on the walls would consist of several dozen mostly open leafs. If the torch holders were made detail brushes, the room would just be a single leaf.

A detail / structural separation is also, I believe, key to making a portal renderer workable. I had a version of Quake that used portals at the convex volume level, and the performance characteristics had considerably worse-than-linear falloff with complexity. By reducing the leaf count considerably, it probably becomes very workable. I will certainly be reevaluating it for trinity.


Ultra-Large Servers

Filed under: — johnc @ 11:26 am

quake2 +set maxclients 200


The stage is set for ultra-large servers. Imagine everyone at QuakeCon in one gigantic level! A single T1 could run 80 internet players if it wasn’t doing anything else, a switched ethernet should be able to run as many as we are ever likely to have together in one place.

There will be a number of issues that will need to be resolved when this becomes a reality, but the fundamentals are there.
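The 80-player T1 figure checks out if you budget roughly 2.4 KB/s per client – my assumption, since the post only gives the total: a T1 is 1.544 Mbit/s, or about 193 KB/s.

```c
#include <assert.h>

/* Back-of-envelope link budget for the T1 claim above. The per-client
   byte rate is an assumed number, not one given in the post. */
int max_clients_on_link(long link_bits_per_sec, long bytes_per_client_sec)
{
    long link_bytes_per_sec = link_bits_per_sec / 8;  /* 1.544 Mbit -> ~193k bytes/s */
    return (int)(link_bytes_per_sec / bytes_per_client_sec);
}
```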

There will probably be issues with UDP packet dropping at the ethernet card level that will need to be worked around with a separate queued thread.

Quake 2 isn’t as cpu intensive as QuakeWorld, but I’m not sure even a Pentium-II 300 could run 200 users. An alpha 21264 could certainly deal with it, though.

The new .bsp format has greatly increased size limits, but you could still make a map that hits them. The first one to be hit will probably be 64k brush sides. Ten thousand brushes can make a really big level if you don’t make it incredibly detailed. Loading a monster map like that will probably take over a minute, and require 32+ megs of ram.

I should probably make an option for death messages to only be multicast to people that are in the potentially hearable set, otherwise death messages would dominate the bandwidth.

Everyone should start thinking about interesting rules for huge games. A QuakeArmies dll has immense potential. Enemy lines, conquering territory, multiple clan bases, etc.

Cooperating servers will be possible with modified dlls, but I probably won’t include any specific code for it in the default game.dll.


Drag Strip

Filed under: — johnc @ 11:25 am

Id Software went to the drag strip today. The 100 degree heat was pretty oppressive, and my NOS regulator wasn’t working, but a good time was had by all.

I made six runs in the 126 to 133 mph range and didn’t even burn a spark plug, which is a nice change from a couple road track events I have been to.

Best times for everyone:

Bob Norwood’s PCA race car: 10.9 / 133 mph (slicks)
My turbo testarossa: 12.1 / 132
Adrian’s viper: 13.5 / 105
Todd’s ‘vette: 13.9 / 101
Tim’s porsche: 14.3 / 96
Bear’s supra: 14.4 / 96
Cash’s M3: 15.2 / 94

My TR is never going to be a good drag car (>4000 lbs!), but when we go back on a cool day this fall and I get my NOS running, it should be good for over 140 in the quarter. 50 mph to 200 mph is its real sweet spot.

I think Bear is heading for the chip dealer so he can get ahead of Tim :)


Fred Brooks “The Mythical Man-Month”

Filed under: — johnc @ 11:23 am

Zoid commented that my last .plan update sounded like Fred Brooks’ “The Mythical Man-Month”. He is certainly correct.

When I read TMMM two years ago, I was stunned by how true and relevant it was. I have something of a prejudice against older computer books – I think “If it’s more than five years old, it can’t be very relevant” (sure, that’s not too rational, but what prejudice is?).

Then I go and read this book that is TWENTY YEARS old, that talks about experience gained IN THE SIXTIES, and I find it mirroring (and often crystallizing) my thoughts on development as my experiences have taught me.

It even got me fired up about documenting my work. For about a day :)

I had to fly out to CA for biz on thursday, so I decided to grab and re-read TMMM on the plane.

It was just as good the second time through, and two more years of development under my belt hasn’t changed any of my opinions about the contents.

If you program (or even work around software development), you should read this book.


Quake’s Software Quality

Filed under: — johnc @ 11:22 am

The quality of Quake’s software has been a topic of some discussion lately. I avoid IRC like the plague, but I usually hear about the big issues.

Quake has bugs. I freely acknowledge it, and I regret them. However, Quake 1 is no longer being actively developed, and any remaining bugs are unlikely to be fixed. We would still like to be aware of all the problems, so we can try to avoid them in Quake 2.

At last year’s #quakecon, there was talk about setting up a bug list maintained by a member of the user community. That would have been great. Maybe it will happen for Quake 2.

The idea of some cover up or active deception regarding software quality is insulting.

To state my life .plan in a single sentence: “I want to write the best software I can”. There isn’t even a close second place. My judgement and my work are up for open criticism (I welcome insightful commentary), but I do get offended when ulterior motives are implied.

Some cynical people think that every activity must revolve around the mighty dollar, and anyone saying otherwise is just attempting to delude the public. I will probably never be able to convince them that isn’t always the case, but I do have the satisfaction of knowing that I live in a less dingy world than they do.

I want bug free software. I also want software that runs at infinite speed, takes no bandwidth, is flexible enough to do anything, and was finished yesterday.

Every day I make decisions to let something stand and move on, rather than continuing until it is “perfect”. Often, I really WANT to keep working on it, but other things have risen to the top of the priority queue, and demand my attention.

“Good software” is a complex metric of many, many dimensions. There are sweet spots of functionality, quality, efficiency and timeliness that I aim for, but fundamentally YOU CAN’T HAVE EVERYTHING.

A common thought is that if we just hired more programmers, we could make the product “better".

It’s possible we aren’t at our exactly optimal team size, but I’m pretty confident we are close.

For any given project, there is some team size beyond which adding more people will actually cause things to take LONGER. This is due to loss of efficiency from chopping up problems, communication overhead, and just plain entropy. It’s even easier to reduce quality by adding people.

I contend that the max programming team size for Id is very small.

For instance, sometimes I need to make a change in the editor, the utilities, and the game all at once to get a new feature in. If we had the task split up among three separate programmers, it would take FAR longer to go through a few new revs to debug a feature. As it is, I just go do it all myself. I originated all the code in every aspect of the project, so I have a global scope of knowledge that just wouldn’t be possible with an army of programmers dicing up the problems. One global insight is worth a half dozen local ones.

Cash and Brian assist me quite a lot, but there is a definite, very small, limit to how many assistants are worthwhile. I think we are pretty close to optimal with the current team.

In the end, things will be done when they are done, and they should be pretty good. :)

A related topic from recent experience:

Anatomy of a mis-feature
As anyone who has ever dissected it knows, Quake’s triangle model format is a mess. Any time during Quake’s development that I had to go back and work with it, I always walked over to Michael and said “Ohmygod I hate our model format!”. I didn’t have time to change it, though. After Quake’s release, I WANTED to change it, especially when I was doing glquake, but we were then the proud owners of a legacy data situation.

The principal reason for the mess is a feature.

Automatic animation is a feature that I trace all the way back to our side-scroller days, when we wanted simple ways to get tile graphics to automatically cycle through animations without having to programmatically step each object through its frames.

I thought, “Hmm. That should be a great feature for Quake, because it will allow more motion without any network bandwidth.”

So, we added groups of frames and groups of skins, and a couple ways to control the timing and synchronization. It all works as designed, but parsing the file format and determining the current frames was gross.

In the end, we only used auto-frame-animation for torches, and we didn’t use auto-skin-animation at all (Rogue did in mission pak 2, though).

Ah well, someone might use the feature for something, and it’s already finished, so no harm done, right?

Wrong. There are a half dozen or so good features that are appropriate to add to the triangle models in a quake technology framework, but the couple times that I started doing the research for some of them, I always balked at having to work with the existing model format.

The addition of a feature early on caused other (more important) features to not be developed.

Well, we have a new model format for Quake 2 now. It’s a ton simpler, manages more bits of precision, includes the gl data, and is easy to extend for a couple new features I am considering. It doesn’t have auto-animation.

This seems like an easy case – almost anyone would ditch auto-animation for, say, mesh level of detail, or multi-part models. The important point is that the cost of adding a feature isn’t just the time it takes to code it. The cost also includes the addition of an obstacle to future expansion.

Sure, any given feature list can be implemented, given enough coding time. But in addition to coming out late, you will usually wind up with a codebase that is so fragile that new ideas that should be dead-simple wind up taking longer and longer to work into the tangled existing web.

The trick is to pick the features that don’t fight each other. The problem is that the feature that you pass on will always be SOMEONE’s pet feature, and they will think you are cruel and uncaring, and say nasty things about you.


Sometimes the decisions are REALLY hard, like making head to head modem play suffer to enable persistent internet servers.


D3D vs. OpenGL

Filed under: — johnc @ 11:21 am

This little note was issued to a lot of magazines by microsoft recently. Just for the record, they have NOT contacted us about any meetings.

All the various dramas in this skit haven’t quite settled down, but it looks like microsoft is going to consciously do The Wrong Thing, because of political issues. Sigh.

Our goal was to get the NT OpenGL MCD driver model released for win-95, so IHVs could easily make robust, high performance, fully compliant OpenGL implementations. Microsoft has squashed that. Flushed their own (good) work down the toilet.

The two remaining options are to have vendors create full ICD opengl implementations, or game specific mini-drivers.

Full ICD drivers are provided by intergraph, 3dlabs, real3d, and others, and can run on both NT and 95 (with code changes). Microsoft still supports this, and any vendor can create one, but it is a lot of work to get the provided ICD code up to par, and bug prone. On the plus side, non-game tools like level editors can take full advantage of them.

Minidrivers certainly work fine – we have functional ones for 3dfx and powerVR, and they have the possibility of providing slightly better performance than fully compliant drivers, but partial implementations are going to cause problems in the future.

We will see some of both types of drivers over the next year, and Quake 2 should work fine with either. We also intend to have Quake 2 show up on several unix systems that support OpenGL, and I still hope that rhapsody will include OpenGL support (we’ll probably port a mini-driver if we can’t get real support).

Once again, we won’t be idiotic and crusade off a cliff, but we don’t have to blindly follow microsoft every time they make a bad call.

Subject: Microsoft D3D vs. OpenGL
Author: Julie Whitehead at Internet
Date: 6/23/97 10:01 AM

Dear Editor,
You may be aware of a press release that was issued On June 12, by Chris
Hecker, former MS employee and developer of D3D [sic]. The statement asks
Microsoft to develop a stonger link between D3D and OGL.The press release,
was signed by several game developers representing the top tier 3-D game
developers. Microsoft is dedicated to maintaining an active relationship
with its DirectX developers. In response to this request Microsoft will host
the developers included in the statement at a developers roundtable in July.
The purpose of the roundtable is to openly consolidate input and feedback
from developers. Tentative date for the roundtable is immediately following
Meltdown, July 18.

Direct3D is Microsoft’s recommended API for game developers with more than
100 developers using Direct3D as the defacto consumer API. OGL is widely
regarded as a professional API designed for high precision
applications such as CAD, CAM, etc. Our hope is that this round table
will provide Microsoft with the feedback required to evolve our 3D APIs
in a way that delivers the best platform for our developers.

If you have any questions or wish to speak with a Microsoft
spokesperson, please let me know.

Julie Whitehead


New Processors Running

Filed under: — johnc @ 11:20 am

We got the new processors running in our big compute server today. We are now running 16 180mhz r10000 processors in an origin2000. Six months ago, that would have been on the list of the top 500 supercomputing systems in the world. I bet they weren’t expecting many game companies. :)

Some comparative timings (in seconds):

mips = 180 mhz R10000, 1meg secondary cache
intel = 200 mhz ppro, 512k secondary cache
alpha = 433 mhz 21164a, 2meg secondary cache

qvis3 on cashspace:

cpus   mips   intel   alpha
----   ----   -----   -----
   1    608     905     470
   2    309     459
   3    208     308
   4    158     233
   8     81
  12     57
  16     43

(14 to 1 scalability on 16 cpus, and that’s including the IO!)
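That parenthetical is just the single-cpu time divided by the 16-cpu time from the table: 608 / 43 ≈ 14.1, or about 88% parallel efficiency. As a trivial helper (the definitions are the standard speedup/efficiency formulas, not anything from the tools themselves):

```c
#include <assert.h>

/* speedup and parallel efficiency for timings like the qvis3 runs above */
double speedup(double t_serial, double t_parallel)
{
    return t_serial / t_parallel;
}

double efficiency(double t_serial, double t_parallel, int ncpus)
{
    return speedup(t_serial, t_parallel) / ncpus;   /* 1.0 = perfect scaling */
}
```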

The timings vary somewhat on other tools – qrad3 stresses the main memory a lot harder, and the intel system doesn’t scale as well, but I have found these times to be fairly representative. Alpha is almost twice as fast as intel, and mips is in between.

None of these processors are absolutely top of the line – you can get a 195 mhz r10k with 4meg L2, a 300 mhz PII, and a 600 mhz 21164a. Because my codes are highly scalable, we were better off buying more processors at a lower price, rather than the absolute fastest available.

Some comments on the cost of speed:

A 4 cpu pentium pro with plenty of memory can be had for around $20k from bargain integrators. Most of our Quake licensees have one of these.

For about $60k you can get a 4 cpu, 466 mhz alphaserver 4100. Ion Storm has one of these, and it is twice as fast as a quad intel, and a bit faster than six of our mips processors.

That level of performance is where you run into a wall in terms of cost.

To go beyond that with intel processors, you need to go to one of the “enterprise” systems from sequent, data general, ncr, tandem, etc. There are several 8 and 16 processor systems available, and the NUMA systems from sequent and DG theoretically scale to very large numbers of CPUs (32+). The prices are totally fucked. Up to $40k PER CPU! Absolutely stupid.

The only larger alpha systems are the 8200/8400 series from dec, which go up to 12 processors at around $30k per cpu. We almost bought an 8400 over a year ago when there was talk of being able to run NT on it.

Other options are the high end sun servers (but sparcs aren’t much faster than intel) and the convex/hp systems (which weren’t shipping when we purchased).

We settled on the SGI Origin system because it ran my codes well, it is scalable to very large numbers of processors (128), and the cost was only about $20k per cpu. We can also add Infinite Reality graphics systems if we want to.

Within a couple years, I’m sure that someone will make a plug-in SCI board for intel systems, and you will be able to cobble together NUMA systems for under $10k a cpu, but right now the SGI is the most suitable thing for us.

I have been asked a few times if Quake will ever use multiple processors. You can always run a dedicated server on one cpu and connect to it to gain some benefit, but that’s not very convenient, doesn’t help much, and is useless for internet play.

It’s waaaay down on the priority list, but I have a cool scheme that would let me make multiple copies of the software rendering dll and frame-pipeline the renderers. Response would be cut in half and the frame rate would double with two cpus, but pipelining more than one frame would be a bad idea (you would get lag on your own system).

I wouldn’t count on it, but some day I might take a break from serious work and hack it in.

There is no convenient way to use multiple processors with the hardware accelerated versions, except to run the server on a separate cpu.

That will probably be an issue that needs to be addressed in the lifespan of the next generation technology. Eventually people are going to start sticking multiple cpus (or multiple thread issue systems sharing resources) on a single chip, and it will become a consumer level item. I’m looking forward to it.


Updating .plans at the Top like Everyone Else

Filed under: — johnc @ 11:18 am

Ok, I’m finally updating my .plans at the top like everyone else…

E3 was interesting, and the tournament went extremely well.

You would think that the top deathmatchers would be an evenly matched group, separated by mere milliseconds of response time, and that the matches would be close.

It’s not like that at all. There are master players. And there is Thresh.

We were watching him play with our jaws hanging open. I don’t think he was killed a single time in the finals. He did things we had never seen before. It was amazing to watch.

I feel a lot better about the contest now, because even if the sixteen finalists weren’t necessarily the sixteen best players due to internet issues, I do think that the grand prize winner IS the best single player.

The level of sportsmanship was gratifying, especially given the stakes. No sore losers, no tantrums. Everyone was cool.

After the finals, a japanese champion (highroller) asked for a match with Thresh. I expected him to pass, considering the pressure of the tournament, but he happily accepted and delivered an eighty-something to negative-three beating (which was accepted with good grace).

I don’t see much point to any more big tournaments until a few more of these mutant superpowered deathmatchers show up…

As far as everything else at E3 goes, I saw a bunch of good looking games, but I am fairly confident of two things:

Nobody is going to eclipse Quake 2 this christmas. Different tradeoffs are being made that will appeal to different people, and there are going to be other products that are at least playing in the same league, but Q2 should be at the top of the pile, at least by the standards we judge games. Several licensees will be picking up all the Q2 features for their early ‘98 products, so games should get even better then. (ok, I guess that is just my cautious, long-winded way of saying Q2 will rule…)

Some notable companies are going to ship much later than they are expecting to, or make severe compromises. I wouldn’t advise holding your breath waiting for the quoted release dates. Relax, and let the developers get things out in due time.

Ugh. I haven’t coded in three days. Withdrawal.


Pleased with Things

Filed under: — johnc @ 11:17 am

I’m pretty damn pleased with things right now.

We are just buttoning up the E3 demo stuff, and it looks really good. It is clearly way alpha material, but people should be able to project where it is going.

The timing is a bit inconvenient for us, because we still aren’t quite through with converting all the .qc work that Cash did over to straight C code in the new engine. The monsters are just barely functional enough to show, with none of the new behavior in. If E3 was a week or two later, the demos would almost be real playtesting.

Q2 is going to be far and away the highest quality product id has ever done. There are new engine features, but the strength of the product is going to be how everything is fitted together with great care. (don’t worry, next year will be radical new technology all over again)


Sound is being improved in a number of ways.

All source samples are 22 khz / 16 bit, and you can restart the sound system at different quality levels without exiting the game. High quality sound will require more memory than the base 16 meg system. The system can automatically convert to 11 khz / 8 bit sounds, but we are probably going to include a separate directory with offline converted versions, which should be slightly higher quality. Homebrew patches don’t need to bother.

Sounds can now travel with a moving object. No Doppler effects, but it positions properly. (well, spatialization is a bit fucked this very instant, but not for long)

I finally got around to tracking down the little bug with looping sounds causing pops.

I have intentions to do three more things with the sound engine, but the realistic odds are that they won’t all make it in:

Voice over network. I definitely don’t have time to do a super-compressed version, but I can probably hack something in that the T1 players would have fun with.

Radiosity sound solution. It’s obvious in retrospect, but it was a “eureka!” thought for me when I realized that the same functions that govern the transport of light for radiosity also apply to sound. I have research plans for next-generation technology that include surface reflection spectrums and modeling the speed of sound waves, but I think I can get a simplified solution into Q2 to provide an ambient soundscape with the same level of detail as the lightmaps. I’m a little concerned about the memory footprint of it, but I’m going to give it a shot.

Synchronized, streaming sound from disk. Special events and movie demos won’t need to precache gigantic sounds, and they can rely on the timing.


Q2 has a generalized inventory structure and status display that should be adaptable to just about anything anyone wants to do in a TC.


On saturday, I give my 328 away at E3. I know that there were lots of issues with the contest, and to be honest, I probably wouldn’t have done the nationwide contest if I could have foreseen all the hassle (I could have just given it away at #quakecon…), but the finals should still be really cool. It just wasn’t possible to make the contest “completely fair.” Not possible at all. In any case, I don’t think anyone will deny that the finalists are some of the best quake players around.


I doubt I can convey just how well things are going here. Things probably look a little odd from the outside, but our work should speak for itself. I have been breaking into spontaneous smiles lately just thinking about how cool things are (of course, that could just be a sleep deprivation effect…).

We have a totally kick-ass team here.

We are on schedule. (no shit!)

We are doing a great product.

Everyone watch out!


Bad News

Filed under: — johnc @ 11:16 am

Bad news.

It looks like this is when “unsupported” really becomes unsupported.

Glquake and QuakeWorld were fun to do, but keeping the datasets compatible with quake 1 has really held me back a lot. I badly wanted to get one more release out, but circumstances have forced me to finally and irreversibly break with compatibility, and I just don’t have the time to devote any effort to a stagnant codebase. You probably won’t see any more releases from id until hexen 2 ships. Sorry.

I have given Zoid and Jack Mathews free license to extend and upgrade the QuakeWorld codebase from the last released revision, so this may actually mean that QW receives more attention than I was able to give it.

On the bright side, the new bsp format will offer some great new capabilities that will be appreciated by all:

Greater robustness. Only one bsp tree is built, and no surfaces are generated that weren’t part of the map brushes.

No fixed clipping hull restrictions. You can now set any mins/maxs you like.

You can tell the texture that a trace clips on in the game, so different surface attributes are now possible.

Textures are no longer stored in the bsp file.

Full color lightmaps for glquake. The “surprise” that I mentioned before was colored lighting hacked into glquake in a way that didn’t require a change in the format, but this is better.

If any hard-core add on hackers can present a serious case for additional modifications to the bsp file, now is the time to let me know.


Quake Port at Apple’s WWDC

Filed under: — johnc @ 11:15 am

As some of you may know, a port of Quake was demoed at apple’s WWDC. Here is the full info:

A couple weeks ago, I got an email saying: “Hey! We heard you are porting quake for WWDC!”
I replied: “Uh, first I’ve heard of it… I was planning on supporting Quake 2 on it late this year…”

Well, I stole some time and went ahead and did it (mostly last weekend – running tight!). I’m quite happy with how it turned out, and I’m glad it made it for the demos.

It is actually a port of the current research QuakeWorld-merging-into-Quake2 codebase, so it only plays network games at the moment.

It is running through 24 bit Display PostScript, and doesn’t have the assembly language compiled in, so don’t believe anyone who says it was running faster than under windows. It was a fast demo system. There is a good chance that it will be a bit faster than win32 when I am done with it, because the direct-to-screen API doesn’t require all the locks and unlocks of DirectDraw, and the sound access will avoid the DirectSound penalties, but basically they should be the same.

98% of the support I need for games is present in rhapsody, and now that there is an existing game for it, the remaining decisions can be rationally guided.

I am still going to press the OpenGL issue, which is going to be crucial for future generations of games.

I am definitely going to support Quake 2 on rhapsody. I may make a public release of the QuakeWorld demo, but I will probably wait until we get the full screen api working. Omnigroup has a little qspy-like openstep program that we can use with it.


Native Glide Port of Quake

Filed under: — johnc @ 11:14 am

I have gotten several emails speculating that there will now be a native glide port of quake. Here is the straight answer:

I have considered a glide port many times (especially now that the rendering code is in a dll), but I always reach the conclusion that it wouldn’t be justified.

On the plus side, it could get a 10%-15% speedup over the OpenGL driver without going through too many contortions. Primarily by saving transforms for the lightmap pass and doing tightly packed vertex arrays for the enemy models.

The big drawback is that every codepath that gets added holds back future innovation. Just having software and gl is a lot of work, and I have already committed to verite support. This is a difficult point for some people to understand, but it is crucially important. The more places I need to rewrite a feature, the less likely I am to put it in. If I only had the opengl version to worry about, Quake 2 would be so much cooler…


Brian Hook has been Hired

Filed under: — johnc @ 11:12 am

Brian Hook has been hired as our new programmer. Brian wrote the glide API for 3dfx, worked on CosmoGL, and wrote a book on 3d programming that he is now horribly embarrassed about.


3drealms / Quake Deal

Filed under: — johnc @ 11:09 am

I’m sure you have all heard about the 3drealms / quake deal by now. It took a long time to get everything nailed down, but it should be worth it.

The “quake 2” terminology is a little confusing. They have all the quake / glquake / quakeworld stuff right now, but they will be picking up the quake 2 codebase after we finish it.

I’m quite excited about this – it should be a very complementary arrangement. We would never have done a game like Duke at id, but there are many valid styles of design that are mutually exclusive. Todd and the rest of the Duke team are hard working developers with a pretty clear vision of what they want. It happens to be different than our vision, but the market is plenty big enough for both of them.


Consolidated QuakeWorld Client

Filed under: — johnc @ 11:07 am

The consolidated QuakeWorld client has been working pretty well. I’ve been playing deathmatch with it in GL mode for the past week. There are still a number of things to do on it, and I haven’t been working on it for a while due to higher priority tasks. A lot of other non-graphics things have changed in the new architecture as well.

It is really cool to be able to switch between software and gl without even restarting the game. We will be testing Quake 2 extensively in GL and even doing some specific development for it. My current wild guess is that about 15% of quake 2 customers will run the OpenGL version (there will be several cards coming out this year that are fast enough, besides just 3dfx), so it is definitely a factor in our decisions.

The verite renderer will still be supported in quake 2, but it won’t have the special features of glquake. (it will still have its own custom features like anti-aliasing, though)

There is a very cool new surprise feature for you in the next gl release :)


For the past several days I have been working on a new version of qbsp that should be dramatically more robust for “crazy” maps. Raven is approaching completion on Hexen 2, and they have a couple problems in their maps, so this is my top priority.

I figured out something about CSG / BSP operations that had been kicking around in the back of my head for almost two years now. The separate (and epsilon-issue-prone) CSG phase is not needed at all if you directly construct a BSP tree from volumes instead of from polygons. I have that part working, but there is just so much work in the tools that getting the rest of the stuff working again is taking quite a lot of effort.

I will make another tool release when things calm down, but understandably that is about at the bottom of my priority list.


OpenGL Code no longer Scales the Status Bar and Console

Filed under: — johnc @ 11:06 am

Ok, the current OpenGL code no longer scales the status bar and console. You can stop complaining now. The next release will be the consolidated rendering code for quakeworld. I’m not sure when I will be able to make a standalone version.

The consolidated quake will also be available on NT-alpha as well as x86. If you have a powerstorm-T card, glquake works pretty well. Glint and oxygen cards don’t work well enough, but the normal quake software version should work fine. We may get a little bit of asm code written for the software version.


Second Generation QuakeWorld

Filed under: — johnc @ 11:06 am

The second generation QuakeWorld is out and live now. We will probably release a couple bug fix versions over the next week or so as problems are reported.

Overall, I’m pleased with the results – I think I have delivered very solid improvements in game play. I certainly learned a lot along the way. If you have anything resembling a decent connection, you should be able to play a good game. A server with a 400+ ms ping and 10% packet loss is still not going to play all that great, but you should just avoid those.

The packet size is about as small as it is going to get for the general cases. Any more shrinkage will have to be game-specific compression, like the specialized nail update.

I can make doors and plats move smoothly, but it will take a good chunk more development. This will probably be done for quake 2.

I have it all set up to clip movement against other players during prediction, but I probably need a day or two to finish it. I’m not confident that I’ll get to that anytime soon, though.

I really want to get client side demo recording and more spectator mode options (see through player’s eyes, chase cam, etc), but I just don’t have the time right now.

The next major upgrade will be a quakeworld that can run in software and OpenGL modes. A verite module will come later.

This combination (QW networking and switchable rendering) will be the base that we move all of our Quake 2 work over to.


Damaged F40

Filed under: — johnc @ 11:05 am

Someone ran into my F40 in the parking lot, then took off. Words cannot do justice to how I feel right now.

If anyone knows a tall white male in the dallas area that now has red paint and carbon fibre on their tan pickup truck, turn the bastard in!

Michael Abrash is Working at Microsoft Again

Filed under: — johnc @ 11:04 am

Michael Abrash is working at microsoft again, due to external reasons. This is the only time anyone has ever left id that we aren’t better off without. :(

That does give me an excuse to visit seattle more often and pester the folks at microsoft about various broken things…

Look for a hardcover compilation of nearly everything Michael has written later on this year. Michael and I are probably going to add some hindsight notes to many of the articles for the new edition.


N64 quake is looking really awesome. We got DM5 (the only level small enough to fit before we take a lot of space saving measures) running perfectly in only two weeks. It looks about like glquake with “picmip 1”, and it runs at 30 fps.

We are going to have transparent water in all the maps, and all the lights will have full color control, so it should look great.

We don’t know what maps we are going to use yet. There will probably be a combination of modified quake, level pack, and new maps. The biggest pain is the tiny size of the cartridge. I am going to implement some more space efficient file formats, and all the maps are going to have the non-essentials crunched out, but we are still not going to be able to fit as many on as I would like.


Work is progressing well with the new rendering architectures. I have a test harness that can dynamically switch between using a ref_soft.dll and ref_gl.dll for rendering in the same window. I have a lot of work to do before the entire game will run like that, and there may be some incompatibilities with normal quake, because this is aimed primarily at Quake 2.

The interface to the renderers is very cool – it only takes a single file of code to harness and exercise all the rendering features. If we actually release with separate DLLs, people are going to link the refreshes into their own programs and use them as an object level rendering toolkit… You could write a Quake-like game in visual basic. There is a whole mess of biz type issues with that that I don’t even want to think about now.


Significant Amount of Response on the Quake 2 Extension Mechanism

Filed under: — johnc @ 11:03 am

I have gotten a significant amount of response on the Quake 2 extension mechanism. I do read everything that comes my way (I can’t respond to all of it, though), and I have learned a few things from the mail.

Nothing is set in stone yet, but it is still looking like a dll is going to be the primary interface. I have been seriously considering a java interface, but the tradeoffs (time spent implementing takes away from something else…) just don’t quite add up. Other options, like enhancing qc or using other languages like perl have very remote chances.

One of the primary reasons is that you can always build UP – put more functionality on top of a dll – but you can’t always build DOWN – accessing the registry from java, for instance.

For Id Software to develop a game, a dll will be most efficient. We have more cpu power, and we can debug it more easily. We are directing significant effort towards making Quake 2 a better GAME, as well as just a better multiplayer virtual world. Quake 1 was pretty messed up from a game standpoint, and we don’t plan on doing that again.

What I can offer the qc hacking crowd is a public release of the qc interface and interpreter code from Quake 1 when Quake 2 is released. The user community can then bolt things together so that there can be one publicly trusted DLL that executes an updated and modified qc language for portable, secure add ons.

I really do care about portability, but it is just one factor that needs to be balanced against all the others. Things just aren’t clear cut.

Speaking of portability, to remove the guesswork that goes on, here are my current opinions on the various platforms:

Win32 rules the world. You are sticking your head in the sand if you think otherwise. The upside is that windows really doesn’t suck nowadays. Win 95 / NT 4.0 are pretty decent systems for what they are targeted at. I currently develop mostly on NT, and Quake 2 will almost certainly be delivered on win32 first. Our games should run as well as possible on NT; we won’t require any ‘95-only features.

We are not going to do another dos game. No amount of flaming hate mail is going to change my mind on this (PLEASE don’t!). The advantages of good TCP/IP support, dynamic linking, powerful virtual memory, device drivers, etc, are just too much to overcome. Yes, all of those can be provided under dos in various ways, but it just isn’t worth it.

I consider linux the second most important platform after win32 for id. From a biz standpoint it would be ludicrous to place it even on par with mac or os/2, but for our types of games that are designed to be hacked, linux has a big plus: the highest hacker to user ratio of any os. I don’t personally develop on linux, because I do my unixy things with NEXTSTEP, but I have a lot of technical respect for it.

MacOS is, from a money making standpoint, the only OS other than win32 that matters, and it doesn’t matter all that much. We have professional ports done to MacOS instead of unsupported hack ports, which is a mixed blessing. They come out a lot later (still waiting for quake…), but are more full featured. I have zero respect for the MacOS on a technical basis. They just stood still and let microsoft run right over them from waaay behind. I wouldn’t develop on it.

A native OS/2 port of any of our products is unlikely. We just don’t care enough, and we are unwilling to take time away from anything else.

I don’t particularly care for IRIX as a development environment (compared to NT or NEXTSTEP), but SGI has the coolest hardware to run GL apps on. Safe to assume future IRIX ports, but it’s not exactly a top priority.

I wouldn’t start a port to any of the other platforms myself, but if a trusted party (Zoid) wanted to do them, I probably wouldn’t object.

I bought a BeBox because I am a solid believer in SMP, and I like clean, from-scratch systems. I was left fairly nonplussed by it. Yes, it is lean and mean and does a couple things better than any other OS I have seen, but I just don’t see any dramatic advantages to it over, say, NEXTSTEP. Lion (the company doing the mac quake port) has a BeOS port of quake sort of working, and has my full support in releasing it, but it will be strictly an act of charity on their part, so don’t expect too much.

I spent a few months running Plan9. It has an achingly elegant internal structure, but a user interface that has been asleep for the past decade. I had an older version of the quake dedicated server running on it (don’t ask me for it – I lost it somewhere) and I was writing a civilized window manager for it in my spare time, but my spare time turned out to be only a couple hours a month, and it just got prioritized out of existence.

NEXTSTEP is my favorite environment. NT and linux both have advantages in some areas, but if they were on equal footing I would choose NEXTSTEP hands down. It has all the power of unix (there are lots of things I miss in NT), the best UI (IMHO, of course), and it just makes sense on so many more levels than windows. Yes, you can make windows do anything you want if you have enough time to beat on it, but you can come out of it feeling like you just walked through a sewer.

In the real world, things aren’t on equal footing, and I do most of my work on NT now. I hold out hope that it may not stay that way. If apple Does The Right Thing with rhapsody, I will be behind them as much as I can. NEXTSTEP needs a couple things to support games properly (video mode changing and low level sound access). If apple/next will provide them, I will personally port our current win32 products over.

If I can convince apple to do a good hardware accelerated OpenGL in rhapsody, I would be very likely to give my win NT machine the cold shoulder and do future development on rhapsody. (I really don’t need Quickdraw3D evangelists preaching to me right now, thank you)


Technical Issue to be Discussed

Filed under: — johnc @ 11:01 am

Here is a technical issue to be discussed:

I am strongly considering dropping qc in Quake 2 in favor of exporting most of the game logic to a separate .dll file. This wasn’t an option when we had to support dos, but I think it is the correct choice now.

There are a lot of issues involved with this.

As everyone who has tried to do anything serious with qc knows, it has its limitations (ahem). I could improve the language, or just adopt a real language like java, but the simplest thing to do would be just use native code.

It would definitely be more efficient as a dll. As we do more sophisticated game logic, efficiency becomes more and more important. For simple deathmatch modifications this wouldn’t be a big deal, but for full size game levels it will likely be at least a 5% to 10% overall speed improvement.

It would be non-portable. I am dreading the reaction to this from the linux community. Game modifications would have to be compiled separately for each target architecture (windows, linux, irix, etc). I do still intend to have the client be generic to all mods (but more flexible than Q1), so it is really only an issue for servers.

There are security concerns. I suppose to a world that embraces Active-X, this isn’t really an issue, but binary code patches still spook me.

You would actually need a compiler to hack quake. For the serious people, this isn’t an issue, but it would cut out a number of people that currently enjoy hacking quake. I have a strange mixture of pride and shame when I think about the people that have actually started learning programming in my crappy little qc language.

You could debug your patch in a real debugger! Yippee!
