Discussion:
Solaris swap issue
Ketan
2010-12-01 12:19:46 UTC
Permalink
One of our systems is running three Oracle DB instances, and as per the prstat output the system is using approximately 78G of swap.


**********************************************************************************************
# prstat -J -n 2,15

PROJID NPROC  SWAP   RSS MEMORY       TIME  CPU PROJECT
  4038   557   31G   29G    22%  113:23:43  10% proj1
  4036   466   20G   19G    15%  2359:46:4 7.6% proj2
  4023   452   25G   17G    13%   67:33:14 5.8% proj3
     3    44  221M  226M   0.2%  105:55:41 1.0% default
     0   141  859M  543M   0.4%  801:01:21 0.3% system
     1    18  333M  329M   0.3%    6:41:31 0.0% user.root

but vmstat and swap -l show approximately 115G of free swap out of a total of 123G configured swap (ZFS zvol output below)




swapfile                      dev    swaplo     blocks    free
/dev/zvol/dsk/rpool/swap      256,1      16    4194288  849952
/dev/zvol/dsk/swappool/swap1  256,3      16  251658224


vmstat -S 1 3
kthr memory page disk faults cpu
r b w swap free si so pi po fr de sr lf s0 s1 s2 in sy cs us sy id
0 6 0 109555680 13515096 0 0 5119 16 21 0 8 0 0 0 0 26340 160046 36768 17 7 76
0 1 0 120233928 25672592 0 0 3992 0 0 0 0 0 0 0 0 15273 78922 15473 17 3 80
1 0 0 120220304 25661568 0 0 24 0 0 0 0 0 0 0 0 14509 66103 14879 19 2 79
0 1 0 120215496 25656360 0 0 39 0 0 0 0 0 0 0 0 17999 76188 20237 20 3 77


rpool/swap 2.03G 76.6G 2.03G -
swappool 120G 13.9G 18K /swappool
Why is there a difference between the swap figures in the output of prstat and those from vmstat and swap -l?
--
This message posted from opensolaris.org
Phil Harman
2010-12-01 12:57:45 UTC
Permalink
The difference is mostly because Solaris also uses free memory as a virtual swap device, and you have quite a lot of free memory.

As free memory is consumed, your configured swap devices will be used as a secondary preference.
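
To put rough numbers on it (just reading the figures you posted, so treat
these as approximations): prstat's SWAP column is swap *reserved* per
project, i.e. roughly

  31G + 20G + 25G + ~1.4G (default/system/user.root)  ~=  78G reserved

and most of that reservation is currently backed by RAM (the RSS column
adds up to about 66G). vmstat's swap column (120233928 KB ~= 115G) is the
virtual swap still available for new reservations, which includes free
memory as well as the devices, while swap -l only reports blocks on the
physical devices (about 2G + 120G ~= 122G configured). The three tools
measure three different things; none of them says 78G has actually been
paged out to disk.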

You also need to understand the difference between reserved, allocated and used swap space. The command 'swap -s' is very useful for seeing this, and you'll find explanations in the docs.
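
For example (the figures below are illustrative, not from your box):

# swap -s
total: 60000000k bytes allocated + 21000000k reserved = 81000000k used, 120000000k available

Here 'used' is allocated plus reserved virtual swap, which is roughly what
prstat is summing per project (plus things like tmpfs), and 'available' is
roughly what vmstat reports in its swap column. Neither maps directly onto
the free blocks that swap -l shows on the physical devices.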

Also, make sure you are not using the DISM feature with Oracle. And remember, you don't actually ever want to be swapping for real.
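
Two rough checks (take these as sketches rather than a recipe): ISM
segments show up as fully locked in 'pmap -x <oracle shadow pid>' output,
whereas DISM segments are pageable and don't; and for "real" swapping,
watch anonymous page-outs rather than the headline swap numbers:

# vmstat -p 5

Sustained non-zero values in the 'apo' (anonymous page-out) column mean
Oracle memory really is being pushed out to the swap devices.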

Hope that helps. If you need more help you could always hire an experienced consultant ;)

Phil
www.harmanholistix.com
Post by Ketan
Why is there a difference between the swap figures in the output of prstat and those from vmstat and swap -l?
Jaime Cardoso
2010-12-02 18:09:08 UTC
Permalink
Hello all

I was wondering if anyone has any measurements on how best to set up
disks for random I/O, since I have no way of making real comparisons
between identical systems.

The thing is, the LUNs we get from an EMC Symmetrix (yes, I know, awful
for performance, but it's what we have and we can't get away from it)
come in 100GB sizes, and several times we have had to combine a few of
those LUNs using some volume manager (SDS/LVM, VxVM or, soon, ZFS).

Some people here claim that it's best to do a concat, since it's easier
to grow (true) and there is supposedly no performance penalty because all
of those disks are actually LUNs from the same EMC box, but that goes
against everything I have ever done.

Tools like the DTraceToolkit seem to support my claim that a stripe is
always best (we do not have a bottleneck on the Fibre Channel), but since
I don't have identical machines to test both scenarios, I'd like to ask
whether anyone has ever tested something like this.
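
For what it's worth, here is the kind of layout comparison I mean, in SDS
terms (the metadevice names, disk names and interlace are just
placeholders):

A concat of four 100GB LUNs (they fill up one after another):

# metainit d10 4 1 c2t0d0s0 1 c2t1d0s0 1 c2t2d0s0 1 c2t3d0s0

A stripe across the same four LUNs with a 1MB interlace (each burst of
I/O is spread over all four):

# metainit d20 1 4 c2t0d0s0 c2t1d0s0 c2t2d0s0 c2t3d0s0 -i 1024k

With the concat, a hot dataset that happens to live early in the
metadevice queues on only the first LUN; the stripe spreads the same
random I/O over all four LUN queues. ZFS largely sidesteps the choice,
since a pool built from several LUNs dynamically stripes across all of
its top-level vdevs anyway.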

Thanks
--
JaimeC