[p4] Server performance questions.

Eric Dew ericd at resilience.com
Fri Oct 19 10:33:53 PDT 2001


I agree with Stephen Vance's suggestion to look for the root causes.

Still, your 750+ users is pretty large.

I used to work at a place with only about 60 users. However, we did have a
little over half a million files, and db.have hovered around 1 Gig when I
started (it could be around 1.7 by now). I don't recall what the db.rev size
was.

At one time, we ran p4d on a Windows NT machine with admirable specs, but we
had similar problems: simple tasks like syncing just a directory's worth of
files took an inordinate amount of time (the ol' p4 red sneakers were getting
a workout). Checkpointing took well over 2 hours, and that was for weekly
checkpointing.
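
(For anyone setting up a similar job, here is a minimal sketch of a timed
weekly checkpoint. The paths and the script itself are hypothetical rather
than what we actually ran; the one real piece is that "p4d -r <root> -jc"
writes a checkpoint and rotates the journal.)

# Rough sketch of a timed weekly checkpoint; placeholder paths, adjust to
# your installation.
import subprocess
import time

P4ROOT = "/p4root"   # hypothetical server root
P4D = "p4d"          # assumes p4d is on the PATH

def take_checkpoint():
    start = time.time()
    # "p4d -r <root> -jc" writes a checkpoint file and rotates the journal.
    subprocess.run([P4D, "-r", P4ROOT, "-jc"], check=True)
    print("checkpoint took %.1f minutes" % ((time.time() - start) / 60.0))

if __name__ == "__main__":
    take_checkpoint()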

We moved the depot and p4d to a Sun machine and performance increased
greatly. I don't work for Sun, so I don't have a particular interest in
whether you use one or not. However, if you do plan to make the switch, there
will be one very difficult part of the transition: making sure the
capitalization of file names is correct (relative to whatever is found in the
various db.* files, and across variations between users). If people have
checked files in from a Mac, you might also have to deal with non-standard
filenames (some files on Macs will have (c) or (tm) symbols in the name).
Good luck dealing with those as a legacy issue; once you make the transition,
it's no big deal, I think.
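
(As an illustration of the kind of pre-migration check I mean, and not a
script we actually used, something like the following flags both problems:
save the output of "p4 files //..." to a text file, then look for paths that
collide when case is ignored or that contain non-ASCII characters. The
p4files.txt filename is made up.)

# Hypothetical pre-migration check: find case-only collisions and non-ASCII
# characters in depot paths, using saved output of "p4 files //...".
import sys
from collections import defaultdict

def depot_path(line):
    # "p4 files" output looks like: //depot/dir/File.c#7 - edit change 1234 (text)
    return line.split("#", 1)[0]

def main(listing):
    by_lower = defaultdict(set)
    for line in open(listing):
        path = depot_path(line.strip())
        if not path.startswith("//"):
            continue
        by_lower[path.lower()].add(path)
        if any(ord(c) > 127 for c in path):
            print("non-ASCII name:", path)
    for variants in by_lower.values():
        if len(variants) > 1:
            print("case collision:", sorted(variants))

if __name__ == "__main__":
    main(sys.argv[1] if len(sys.argv) > 1 else "p4files.txt")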

I would like the opportunity to shill for my company, since we make
fault-tolerant computers. But your 750+ users scare me, and I don't want to
sell you something that might not perform as advertised under such a huge
load.

EDEW
ericd at resilience.com

Russell C. Jackson wrote:

>Hi everyone,
>
>We are experiencing some growing pains, and I wanted to ask a few questions
>to those of you on the list with large servers.
>
>First off, here are the stats on our server.
>
>Users: 750+ with more than 2/3 of the users working over the WAN. Most users
>are Windows users.
>Files: 1.5 Million+
>Actual space used for depot: 120+ Gigs
>db.have - 1.7 Gigs minimum (I remove old clients regularly to keep this as
>small as possible.)
>db.rev - 650 Megs
>
>Our server is currently a dual 600 MHz machine running Windows NT, SP6a with
>4 Gigs of RAM, a RAID 5 array for storage, and a Gigabit NIC.
>
>During normal work hours, the server utilization is usually between 70% and
>100%. Based on Perforce's numbers in the Performance tuning section of the
>tech notes, this seems really high, since they indicate that a 140 MHz
>single-processor SPARC machine should be able to handle 700 - 800 users. I
>suppose that might be true if all of the users were Unix users that didn't
>use the GUI and didn't run any server-intensive commands, but that isn't
>reality.
>
>Main problems we are seeing when everyone is working:
>
>1. Sync of a single small file can take several minutes to complete, and it
>is the server side that is waiting: it sends the file over and then just
>sits there. I timed one sync today, and it took 7 minutes to complete on a
>file that normally syncs out in 2 seconds.
>
>2. Submit can show the same problem. It will send the files over, but then
>the server will sit there for minutes before it completes the command.
>
>3. Similar problems with other commands that would normally complete very
>fast.
>
>Note: This doesn't happen with every single request, but it happens
>frequently enough to be causing the users a lot of problems.
>
>Questions:
>
>1. Has anyone else seen similar problems? If so, did you find a solution, or
>at least a cause?
>
>2. If you have a similar or larger server:
>	A. What is your configuration?
>	B. How many users do you have remote?
>	C. How many of your users use the GUI?
>
>3. Do any of you have any recommendations about whether faster hardware is
>the way to go, or should we be looking into splitting this server into
>multiple smaller servers? Multiple smaller servers are a pain in terms of
>code management, server maintenance, and costs, which is why we have avoided
>going that route so far.
>
>Thanks,
>Rusty
>
>Russell C. Jackson
>rcjackson at alum.atu.edu
>
>




