The best advice really depends on the actual behaviour of your database server (do you have graphs, etc.?).
The ConnectionPoolingMaxActive setting is primarily a safeguard that stops a runaway application from taking your database down with it. For example, when inefficient SELECT queries start using 100% CPU, or when the operating system doesn't have enough disk cache and starts reading from disk like crazy, allowing even more active connections will only make the situation worse.
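The idea behind such a cap is the same in any connection pool: a hard limit on simultaneously active connections, with extra callers queueing instead of piling more load onto the database. As a rough illustration only (this uses HikariCP, not whatever pool the Mendix runtime uses internally, and the URL, credentials, and numbers are made up):

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolCapExample {
    public static void main(String[] args) {
        HikariConfig cfg = new HikariConfig();
        cfg.setJdbcUrl("jdbc:postgresql://localhost:5432/appdb"); // hypothetical database
        cfg.setUsername("app");
        cfg.setPassword("secret");
        // Hard cap on active connections, analogous to ConnectionPoolingMaxActive:
        // runaway code can never hold more than 50 connections at once.
        cfg.setMaximumPoolSize(50);
        // Callers beyond the cap wait up to 30 s for a free connection,
        // then fail with an exception instead of opening connection #51.
        cfg.setConnectionTimeout(30_000);

        try (HikariDataSource ds = new HikariDataSource(cfg)) {
            // hand ds.getConnection() out to application code...
        }
    }
}
```

So raising the cap doesn't fix anything by itself; it just moves the queue from the pool into the database.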
It seems that in your situation the connections are being used up by idle transactions that are waiting on each other. This typically happens when two concurrently running actions change the same single object: the second change then blocks inside PostgreSQL until the transaction in which the first change was made is either committed or aborted.
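To make that blocking concrete, here is a minimal JDBC sketch (the account table, URL, and credentials are made up for illustration, and it assumes the PostgreSQL JDBC driver on the classpath). The second UPDATE targets the same row and simply waits inside PostgreSQL until the first transaction commits:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class RowLockDemo {
    // Hypothetical connection details for illustration only.
    static final String URL = "jdbc:postgresql://localhost:5432/appdb";
    static final String USER = "app";
    static final String PASS = "secret";

    public static void main(String[] args) throws Exception {
        try (Connection tx1 = DriverManager.getConnection(URL, USER, PASS)) {
            tx1.setAutoCommit(false);
            try (Statement s = tx1.createStatement()) {
                // Transaction 1 updates the row and now holds its row lock.
                s.executeUpdate("UPDATE account SET balance = balance - 10 WHERE id = 1");
            }

            Thread waiter = new Thread(() -> {
                try (Connection tx2 = DriverManager.getConnection(URL, USER, PASS)) {
                    tx2.setAutoCommit(false);
                    try (Statement s = tx2.createStatement()) {
                        // Blocks here: PostgreSQL makes this UPDATE wait until
                        // transaction 1 commits or rolls back.
                        s.executeUpdate("UPDATE account SET balance = balance + 10 WHERE id = 1");
                    }
                    tx2.commit();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
            waiter.start();

            Thread.sleep(5000);   // meanwhile tx2 sits there, holding a pooled connection
            tx1.commit();         // releases the row lock; tx2's UPDATE now proceeds
            waiter.join();
        }
    }
}
```

While the sketch sleeps, the second session shows up in pg_stat_activity as waiting on a lock, which is exactly the pile-up of inactive connections described above.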
So fixing the application would be the best thing to do. You can of course tweak the connection pool settings as a quick fix until the Java action is fixed.
Also, allowing more active connections makes the PostgreSQL backend processes themselves use more memory (the memory in which they do joins, sorts, etc.), which shrinks the amount left for disk cache. That can lead to slowdowns from disk activity if the active working set of the database no longer fits in memory. For example, with work_mem set to 32MB, a hundred connections each running a sort or hash join can already claim a few gigabytes on top of shared_buffers.
We had issues like these when there was too little memory for Postgres. When you run multiple app engines, you can define how the memory is divided; in our case too much was given to the model and too little to the database. The error went away once Postgres had more memory.