-XX:+HeapDumpOnOutOfMemoryError

IGate JVM options:
-verbose:gc -Xloggc:/u01/igate/liferay_home/jboss-eap-6.1/standalone_node1_1/log/gc.log.09_05_2018
-XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+HeapDumpOnOutOfMemoryError
-d64 -Xms4192m -Xmx4192m -Xmn2700m
-XX:ParallelGCThreads=16 -XX:ThreadStackSize=2048k
-Djboss.modules.system.pkgs=org.jboss.byteman -Djboss.modules.system.pkgs=org.jboss.byteman,com.singularity
-XX:MaxPermSize=1024m -XX:SurvivorRatio=6 -XX:ReservedCodeCacheSize=96m
-XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:+ScavengeBeforeFullGC -XX:+CMSParallelRemarkEnabled
-XX:+CMSClassUnloadingEnabled -XX:CMSInitiatingOccupancyFraction=68 -XX:+UseCMSInitiatingOccupancyOnly
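Because the option set above relies on -XX:+HeapDumpOnOutOfMemoryError, here is a minimal, illustrative Java sketch of the kind of leaking code that makes the JVM write an .hprof file when that flag is set. The class name, heap size and dump path in the comments are assumptions for the example, not part of the IGate configuration.

    // Minimal sketch of a deliberate leak, assuming it is run with something like:
    //   java -Xmx64m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp OomDemo
    // (heap size and dump path are illustrative assumptions, not the IGate values)
    import java.util.ArrayList;
    import java.util.List;

    public class OomDemo {
        public static void main(String[] args) {
            List<byte[]> retained = new ArrayList<>();
            while (true) {
                // Strong references keep every array reachable, so the heap fills up and
                // the JVM writes a heap dump file before throwing OutOfMemoryError.
                retained.add(new byte[1024 * 1024]);
            }
        }
    }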
** Please note that jmap heap dump generation will cause your JVM to become unresponsive, so please ensure that no more traffic is sent to the affected/leaking JVM before running the jmap utility. **

When to use
Analyzing JVM heap dumps should not be done every time you face a Java heap problem such as OutOfMemoryError. Since this can be a time-consuming analysis process, it is recommended for the scenarios below:
- The need to understand and tune the memory footprint of your application and/or the surrounding API or the Java EE container itself
- Java heap memory leak troubleshooting
- Java classloader memory leaks
- Sudden Java heap increase problems or trigger events (has to be combined with thread dump analysis as a starting point)
http://javaeesupportpatterns.blogspot.com/2011/11/hprof-memory-leak-analysis-tutorial.html

How to take the heap dump:
<JAVA_HOME>/bin/jmap -heap:format=b <pid>
The file created in this case is a binary heap dump with a ".hprof" extension. (A dump can also be triggered programmatically from inside the JVM; see the sketch at the end of this note.)

Metaspace vs. PermGen: the main difference from a user perspective is that Metaspace by default auto-increases its size (up to what the underlying OS provides), while PermGen always has a fixed maximum size. You can set a fixed maximum for Metaspace with JVM parameters, but you cannot make PermGen auto-increase.

Prior to Java 9, GC logging is not a thread-safe (synchronized) operation. (Synchronized means only one thread can access it at a given point in time; a singleton design could make this possible.)

A long GC pause, meaning something like 25 seconds or more, indicates that something outside the JVM is causing the pause; the garbage collector itself is generally not responsible for pauses that long.

Packet losses between your client and the remote DB server can lead to intermittent "The network adapter could not establish the connection" errors. Use ping to send packets to the remote database IP address and check the transmitted and received packets, and use traceroute to validate the connectivity and the route through the different hop(s) from the server to the client.
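As a complement to running jmap externally, the same kind of binary .hprof dump can be requested from inside a running HotSpot JVM through its diagnostic MBean. This is a minimal sketch assuming an Oracle/HotSpot JVM; the class name and output path are illustrative assumptions.

    import com.sun.management.HotSpotDiagnosticMXBean;
    import java.lang.management.ManagementFactory;

    public class HeapDumper {
        // Standard object name of the HotSpot diagnostic MBean.
        private static final String HOTSPOT_BEAN = "com.sun.management:type=HotSpotDiagnostic";

        public static void main(String[] args) throws Exception {
            HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                    ManagementFactory.getPlatformMBeanServer(), HOTSPOT_BEAN, HotSpotDiagnosticMXBean.class);
            // live = true dumps only objects that are still reachable, which keeps the file smaller.
            // The output path below is an assumption for the example.
            bean.dumpHeap("/tmp/app-heap.hprof", true);
        }
    }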