Limit the FPS of Armagetron Ad....

What do you want to see in Armagetron soon? Any new feature ideas? Let's ponder these groundbreaking ideas...
irchs
Posts: 3
Joined: Sat May 05, 2007 6:15 pm

Limit the FPS of Armagetron Ad....

Post by irchs »

Hello,

A nice feature would be to limit the FPS of the client. I could then cap it at 100 instead of having it render 1000 FPS when I only see 1/20th of them... it would save the fans in my iMac :)

Thanks

Jan
User avatar
Z-Man
God & Project Admin
Posts: 11717
Joined: Sun Jan 23, 2005 6:01 pm
Location: Cologne
Contact:

Post by Z-Man »

Why don't you just activate VSync (vertical sync) in your driver settings? That worked for me :)
irchs
Posts: 3
Joined: Sat May 05, 2007 6:15 pm

Post by irchs »

Ah, I suppose that is an idea, but I am on a Mac, and the driver settings are hidden away to stop me from injuring myself.

A setting would be nice... :)

Thanks

Jan
User avatar
Z-Man
God & Project Admin
Posts: 11717
Joined: Sun Jan 23, 2005 6:01 pm
Location: Cologne
Contact:

Post by Z-Man »

True, but the corresponding code is quite ugly and nonportable. It may get done some day, but don't hold your breath.

What graphics card do you have? For the ATI ones, you can install some control panel that lets you access the vsync setting quite nicely.
irchs
Posts: 3
Joined: Sat May 05, 2007 6:15 pm

Post by irchs »

Aye, it's an ATI :) I will investigate further :)

Thanks for the help!

Jan
User avatar
dlh
Formerly That OS X Guy
Posts: 2035
Joined: Fri Jan 02, 2004 12:05 am
Contact:

Post by dlh »

If you have an ATI video card you can use their ATI Displays application to enable VSync.
User avatar
Jonathan
A Brave Victim
Posts: 3391
Joined: Thu Feb 03, 2005 12:50 am
Location: Not really lurking anymore

Post by Jonathan »

In case explicit vsync support is ever added to Arma, this is how you can do it on OS X as soon as OpenGL is set up, even through SDL:

Code: Select all

#include <OpenGL/OpenGL.h> // OpenGL framework

// ...

CGLContextObj cglContext;
GLint vsync = 1; // 1 waits for vsync, 0 doesn't
cglContext = CGLGetCurrentContext(); // this is how you can get it if you didn't use CGL yourself to set it up in the first place
CGLSetParameter(cglContext, kCGLCPSwapInterval, &vsync);
Wait, it looks like SDL ≥1.2.10 can do it.
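
For reference, a minimal sketch of the SDL route (hedged: this assumes SDL ≥1.2.10, and the attribute only takes effect if it is set before the OpenGL window is created):

Code: Select all

#include <SDL.h>

// Minimal sketch: request vsync through SDL 1.2.10's
// SDL_GL_SWAP_CONTROL attribute. It must be set *before*
// SDL_SetVideoMode creates the OpenGL context.
int main(int argc, char **argv)
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_GL_SetAttribute(SDL_GL_SWAP_CONTROL, 1); // 1 = wait for vsync, 0 = don't

    SDL_Surface *screen = SDL_SetVideoMode(800, 600, 0, SDL_OPENGL);
    if (!screen)
        return 1;

    // ... render loop calling SDL_GL_SwapBuffers() each frame ...

    SDL_Quit();
    return 0;
}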
ˌɑrməˈɡɛˌtrɑn
User avatar
Z-Man
God & Project Admin
Posts: 11717
Joined: Sun Jan 23, 2005 6:01 pm
Location: Cologne
Contact:

Post by Z-Man »

Jonathan wrote: Wait, it looks like SDL ≥1.2.10 can do it.
Wee! Good find.
User avatar
Z-Man
God & Project Admin
Posts: 11717
Joined: Sun Jan 23, 2005 6:01 pm
Location: Cologne
Contact:

Post by Z-Man »

Seems like the NVidia Linux driver does not support the extension SDL uses here, but an appropriate setenv() call works around that. The SDL implementation seems to be unable to <disable> waiting for vsync (setting the variable to 0 is a no-op); it probably assumes it is off by default. No problem here, but hyphyleo won't get above 60 FPS. I'll see what the various effects on Windows are with my NVidia card.
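
A sketch of what that workaround could look like (hedged: RequestVSync is an illustrative name, and the call has to happen before SDL/OpenGL initialization, because the driver reads the variable at context creation):

Code: Select all

#include <stdlib.h> // setenv()

// Illustrative sketch: the NVidia Linux driver reads
// __GL_SYNC_TO_VBLANK when the GL context is created, so this
// must run before the OpenGL context is set up.
void RequestVSync(int enable)
{
    // Last argument 0: don't overwrite a value the user set themselves.
    setenv("__GL_SYNC_TO_VBLANK", enable ? "1" : "0", 0);
}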

While I was at it, I also implemented a primitive, optional form of motion blurring that you only get when you disable VSync; it's the one where the contents of the last frame are blended suitably with the current frame. With VSync enabled or at low framerates, this is a bad idea, but at the typically high framerates passionate VSync-disablers get, it should look sweet.

The implementation grabs the last frame into a texture and renders a screen-filling quad with this texture. That's one more copy operation (assuming the texture grab is really executed) than should be required; does anyone know a better way in OpenGL?
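
For readers following along, here is roughly what the described approach looks like (hedged: BlendLastFrame is an illustrative name, and the sketch assumes an orthographic projection over [0,1]² and, on old cards, a power-of-two sized texture):

Code: Select all

#include <GL/gl.h>

// Illustrative sketch of the approach described above: blend the
// previous frame's texture over the fresh frame, then grab the
// result back for next frame (the extra copy in question).
void BlendLastFrame(GLuint lastFrame, int width, int height, float strength)
{
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, lastFrame);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glColor4f(1.0f, 1.0f, 1.0f, strength); // strength ~0.5 gives a heavy trail

    glBegin(GL_QUADS); // the screen-filling quad
    glTexCoord2f(0, 0); glVertex2f(0, 0);
    glTexCoord2f(1, 0); glVertex2f(1, 0);
    glTexCoord2f(1, 1); glVertex2f(1, 1);
    glTexCoord2f(0, 1); glVertex2f(0, 1);
    glEnd();
    glDisable(GL_BLEND);

    // Copy the blended frame back into the texture for next time.
    glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 0, 0, width, height, 0);
}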

Edit, for the record: here's the Windows way, not too complicated either: http://www.devmaster.net/forums/showthread.php?t=443
User avatar
Z-Man
God & Project Admin
Posts: 11717
Joined: Sun Jan 23, 2005 6:01 pm
Location: Cologne
Contact:

Post by Z-Man »

OK, here are my Windows/NVidia findings. SDL gets it the wrong way round, at least for my setup: if I make a VSync selection in the driver, the application cannot change anything. If I make no VSync selection in the driver, it defaults to "On" and needs to be switched away from that. SDL assumes the default is "Off" and only does something if you wish it to be on; result: nothing ever happens. So now there is special code for Windows as well.
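
The special Windows code presumably comes down to the WGL_EXT_swap_control extension from the link above; a sketch (hedged: SetSwapInterval is an illustrative name, and a real implementation should check the extension string first):

Code: Select all

#include <windows.h>
#include <GL/gl.h>

// WGL_EXT_swap_control: fetch wglSwapIntervalEXT at runtime and set
// the interval explicitly instead of trusting any assumed default.
// Needs a current OpenGL context.
typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);

void SetSwapInterval(int interval) // 1 = wait for vsync, 0 = don't
{
    PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
        (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
    if (wglSwapIntervalEXT)
        wglSwapIntervalEXT(interval);
}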

I also noticed that the motion blur performance is abysmal on Windows (it was fine on my Linux work PC and acceptable on my home PC under Linux); I get a 40-fold drop in framerate if I enable it. I'll test whether using FBOs is less costly than glCopyTexImage2D.
WallyWallWhackr
On Lightcycle Grid
Posts: 18
Joined: Sat Mar 05, 2005 7:24 am
Location: behind a pair of handlebars

Post by WallyWallWhackr »

I have never seen ANY of my Arma installs get above 60fps in WindBlows.

And I am on an Nvidia 8800!

In Linux, however, I get hundreds of fps!

Can't see how my "PC" screamer gets taxed less than a screamin' Mac tho, and my fans aren't zippin' when I play. Those Macs just must not know how to do it right... ;-]

Those TV commercials must be lying about you guys' capabilities.

Vista Ultimate rules! Screw overpriced Apple, and the iPUD, and all the overpriced software for them.

Bwuahahahaha!
User avatar
Ricochet
Round Winner
Posts: 359
Joined: Tue Jun 06, 2006 2:31 pm
Location: United Kingdom
Contact:

Post by Ricochet »

I can get over 250 FPS depending on the server and the number of players.

I'm on Windows, NVidia 7600GT w/ SLI, 1GB DDR2, Intel Pentium dual core 2.8GHz...
User avatar
Z-Man
God & Project Admin
Posts: 11717
Joined: Sun Jan 23, 2005 6:01 pm
Location: Cologne
Contact:

Post by Z-Man »

FBOs are quite neat. I haven't tested them in Windows yet (I need to clean up the code and use glew instead of calling the extension functions directly), but they are about as fast as the accumulation buffer approach Jonathan suggested last time for motion blur. Bonus: by doing the blending with the last frame directly in the front buffer and not swapping buffers, we get the same effect as disabling VSync even on systems that are hard-locked to have it enabled.
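
A rough sketch of the FBO setup (hedged: CreateBlurFBO is an illustrative name; it uses the EXT_framebuffer_object entry points through glew, as mentioned above):

Code: Select all

#include <GL/glew.h>

// Illustrative sketch: render into a texture via an FBO instead of
// grabbing the finished frame with glCopyTexImage2D afterwards.
GLuint CreateBlurFBO(int width, int height, GLuint *texOut)
{
    GLuint tex, fbo;

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, NULL);

    glGenFramebuffersEXT(1, &fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, tex, 0);
    // While the FBO is bound, rendering goes straight into the texture;
    // bind FBO 0 again to blend the textured quad into the front buffer.
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

    *texOut = tex;
    return fbo;
}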

I'll probably write a comprehensive document about the two settings affecting the swap process and their implications for performance, power consumption and perceived image quality. Until then, and until a version actually supporting the new VSync setting comes out, stick to the following rules:

1. If you want to conserve CPU usage, switch the Swap Mode in the Performance Tweaks menu to either Fastest or Flush and set your operating system video driver to wait for VSync (*).
2. If you want guaranteed low latency between input and rendering, choose Finish in the same setting.
3. If you either want insanely high FPS for no good reason (or you are a Zen swordmaster who can split an arrow in full flight in two, or a professional baseball batter, or some other person who can time their actions to millisecond accuracy), or your usual framerate lies between 20 and 100, disable VSync in your system settings (**). Swap Mode should be Finish, unless experimentation shows that Flush gives more performance.

(*) On Linux with an NVidia card, set the shell variable __GL_SYNC_TO_VBLANK to 1. On a Mac with an ATI card, download the control panel linked above in this thread. On Windows with an NVidia card, do nothing; the default is fine.

(**) That's the default for Mac and Linux, so there's nothing to do. Windows users should check their documentation.
User avatar
andi75
On Lightcycle Grid
Posts: 44
Joined: Mon Dec 19, 2005 4:57 pm
Contact:

Post by andi75 »

Why not just add an SDL_Delay() to the main loop? E.g., see the following BASIC snippet:

LET dt = getTimeSinceLastFrame()

SDL_delay( max(10 - dt, 1) )
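
In C, with SDL, the same idea could look like this (hedged: CapFrameRate is an illustrative name, and the 10 ms budget corresponds to a roughly 100 FPS cap):

Code: Select all

#include <SDL.h>

// Illustrative C version of the snippet above: cap the frame rate
// by sleeping away the unused part of a 10 ms per-frame budget.
static Uint32 lastFrame = 0;

void CapFrameRate(void)
{
    Uint32 dt = SDL_GetTicks() - lastFrame; // ms spent on this frame
    if (dt < 10)
        SDL_Delay(10 - dt); // sleep the rest of the 10 ms budget
    else
        SDL_Delay(1);       // always yield at least 1 ms, as in the snippet
    lastFrame = SDL_GetTicks();
}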
User avatar
Z-Man
God & Project Admin
Posts: 11717
Joined: Sun Jan 23, 2005 6:01 pm
Location: Cologne
Contact:

Post by Z-Man »

I fondly remember that we did something just like that in the Windows-only branch of our code:
http://forums.armagetronad.net/viewtopi ... 2675#22675
Ah, memories. They don't make flamewars like that any more.

The problem is that if you choose the delay too big, a manual delay gets you the worst tearing artifacts possible when the user, the operating system, or the program has disabled VSync: you get low framerates (and 100 FPS is low in this context), and disabled VSync only looks acceptable at high framerates. A short, configurable manual delay could be a good idea, though; it gives the OS a good spot to do the other things it does besides running the game. That was the motivation for the delay in the Windows branch, IIRC. It could also work around the full CPU utilization bug most drivers show when you call glFinish().