1Z1-058 (1Z0-058 beta) 11g R2 RAC Expert. Answers.
Last week’s article on 1Z0-058 questions was quite popular, more popular than I expected.
So if you enjoyed the questions, now it’s time to get some answers!
Kevin is a database administrator at Fictional Real Estate Co. All he hears every day is location, location, location. He is asked by one of the Sales team members about Oracle Real Application Clusters. What should he reply to his colleague’s “what is it all about”?
If a painter can paint a wall in an hour, two painters are likely to finish the job in 30 minutes, and it would probably take three painters about 20 minutes. So, in theory, the performance does increase. However, clusters are not about performance: the performance of every individual node doesn’t change just because it’s part of a cluster. As a matter of fact, performance can decrease. Imagine that the painters in the above example have to share their tools. They will have to wait for each other, and even incur some overhead handing the tools over.
It’s the same with RAC: with sequence or index block contention, the whole cluster is no better (and sometimes even worse) than a single node. That’s why cluster-specific performance tuning is so important in a RAC environment.
Redundancy. No single part of a RAC should be a single point of failure. The disks are mirrored, the storage and the machines have two (or more) interfaces between them, virtual IPs, SCANs, bonded network interfaces, etc.
Some time ago I witnessed a rather spectacular failure of an Oracle cluster. Due to some environmental factors (no, not global warming), one of the disks failed. Not a big deal, it was mirrored. The controller started to rebuild the failed disk, and also failed under the load (don’t forget the environmental factor). Not a big deal either, there were two controllers. Unfortunately, the failed controller managed to ship corrupted data to one of the nodes, so the node decided to shut down. Again, not a big deal, there was another node.
After all those events, the cluster was still fully functional, with just one disk, one controller and one node left. All thanks to redundancy.
But what if the entire cluster fails? Read on!
Availability. Real Application Clusters is one of the many High Availability (HA) solutions offered by Oracle. Together with replication (whether it’s Data Guard, Streams, or GoldenGate) they form the Maximum Availability Architecture (MAA). This means that even if the entire cluster fails, there is another cluster, with an exact copy of the data, replicated by Data Guard or a similar solution.
Scalability is another big thing about clusters. Let’s go back to our painters. If we want to paint more walls, we can just use more painters. Provided they don’t share tools, the job will be done faster. With just one painter (single instance database) all you could do is train the guy to work faster (upgrade server hardware) but there’s always a limit to that (not to mention that just marginally faster hardware can cost double).
Clusters waste no resources. All those redundant parts are constantly in use; they don’t sit idle waiting for another part to fail. If you have three painters, all of them will be doing the job, not sitting around waiting for a fellow painter to catch a cold.
By the way, the painters can share the paint, which in the RAC world is represented by shared data storage.
Jenson is a database developer at Fictional Cluster Masters Co. He wants to pull a practical joke on the company’s DBA, so the DBA would think someone is performing a DDoS attack on the cluster. What should Jenson do in his application?
As mentioned above, eliminating all possible contention points is a must in RAC environments. A shared sequence with no cache is a massive contention point. If you’re wondering how I came up with this question, well, I didn’t. It is based on my experience, as I saw a RAC brought to its knees by hundreds of sessions queuing up for a sequence.
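A quick sketch of the difference (sequence names are hypothetical):

```sql
-- A NOCACHE sequence: every NEXTVAL updates the data dictionary,
-- so in RAC all instances serialize on the same block
CREATE SEQUENCE evil_seq NOCACHE;

-- A cached sequence: each instance pre-allocates a range of values
-- in memory, so instances don't contend for every single number
CREATE SEQUENCE friendly_seq CACHE 1000;
```

One caveat: a cached NOORDER sequence will not produce globally ordered values across instances; if your application genuinely needs ordering, that’s a trade-off to think about, not a reason to drop the cache.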
May is a junior DBA at Fictional Calendars Co. She wants to start some RAC instances and stop some other RAC instances. What tools should she use from the list below?
Some people think that you can’t use your old friend sqlplus to start up or shut down instances. Not true. The only difference is that you can only start up or shut down the instance you’re connected to, not the whole cluster. For cluster-wide operations, use srvctl. No cluster-specific parameters (mode=rac, -rac, foo=bar) are needed.
Nothing wrong with graphical tools either, as long as you know how to do things when graphical tools are not available.
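Side by side, the two approaches look roughly like this (database and instance names are hypothetical):

```shell
# The classic way, on one node -- affects only the local instance
sqlplus / as sysdba <<EOF
shutdown immediate
startup
EOF

# Cluster-wide, from any node
srvctl stop database -d orcl
srvctl start database -d orcl

# Or just one instance, again from any node
srvctl start instance -d orcl -i orcl2
```

Note that srvctl talks to the clusterware, so it also keeps the cluster resource states in sync, which sqlplus does not.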
April is a junior DBA at Fictional Calendars Co. She asks the company’s senior DBA August what the number of UNDO and TEMP tablespaces and REDO log groups should be in a RAC environment. If August knows what he’s talking about, what will he answer?
There is no difference for TEMP in a RAC environment. You can have just one, or more than one for different users if you wish.
UNDO, on the other hand, is not shared, so every instance has its own UNDO.
REDO is also owned by every individual instance and it’s recommended that you have at least two groups per instance.
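As a sketch, assuming hypothetical tablespace and instance names (and Oracle-managed files for the log members):

```sql
-- Each instance points at its own undo tablespace via an
-- instance-specific (SID-qualified) parameter
ALTER SYSTEM SET undo_tablespace = 'UNDOTBS1' SID = 'orcl1';
ALTER SYSTEM SET undo_tablespace = 'UNDOTBS2' SID = 'orcl2';

-- Redo log groups belong to a thread, one thread per instance;
-- without OMF you would also specify the member file names
ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 5 SIZE 100M;
ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 6 SIZE 100M;
```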
Mark is a database administrator at Fictional Storage Specialists Co. He has built a 10g R2 RAC on Linux 4 before, using raw devices for the voting disk and Oracle Cluster Registry (OCR), and ASM for data files. He now wants to build an 11g R2 RAC on Linux 5. What are his options when it comes to storage?
In 10g you would need to set up raw devices or use OCFS (Oracle Clustered File System). In Linux 5 raw devices were phased out (though they could still be used), but with 11g R1 you could use block devices to store the voting disk and OCR, so there was no need for raw devices.
Until 11g R2 RAC, you couldn’t use ASM for the voting disk or OCR because of a catch-22: the clusterware needed the voting disk and OCR to start up, but ASM only started after the clusterware.
In 11g R2 RAC however, the ASM bit and the clusterware are now parts of a bigger beast, called Grid Infrastructure, all-in-one so to speak. You can now place everything on ASM, just tick “Use ASM” during installation.
Alternatively, you can place everything on NFS, as long as all the nodes have access to the share and the mount is the same.
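For the NFS route, the mount options matter: Oracle requires hard mounts and disabled attribute caching for database files. A sketch of an /etc/fstab entry, with a hypothetical server and paths (the exact option list is platform-specific, so check the documentation for your OS):

```shell
# nas01:/export/oradata mounted identically on every node
nas01:/export/oradata  /u02/oradata  nfs  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0  0 0
```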
Matthew is a system administrator at Fictional Storage Specialists Co. He is going to install Linux OS as part of task to build an 11g R2 RAC. What packages and/or libraries he may need, depending on the storage options?
To use ASM, you would need three RPMs installed before starting GI installer. They are oracleasmlib, oracleasm-support, oracleasm. Some of them can be found on the installation media if you use Oracle Linux, and some are online. If you use RHEL or CentOS or any other non-Oracle flavour, just download all three for the respective kernel.
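The installation itself is just a few commands; a sketch, with hypothetical package versions (they must match the running kernel):

```shell
# Install the three ASMLib packages
rpm -Uvh oracleasm-support-2.1.7-1.el5.x86_64.rpm \
         oracleasm-2.6.18-194.el5-2.0.5-1.el5.x86_64.rpm \
         oracleasmlib-2.0.4-1.el5.x86_64.rpm

# Configure the driver, then mark the disks for ASM
/etc/init.d/oracleasm configure
/etc/init.d/oracleasm createdisk DATA1 /dev/sdb1
```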
NFS doesn’t need any specific packages, just mount the shares on all nodes before the GI installation. Make sure that the paths are the same on every node, and that the operating system user and group IDs for the “oracle” user and the “oinstall” group are the same on all machines and storage devices.
Optionally, you can use Oracle Direct NFS, which is recommended for performance reasons. The library is called libodm11.so, and enabling it is only a matter of creating a symbolic link and restarting the instance. The performance gain comes from bypassing the operating system cache, which frees up memory and speeds up the I/O.
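One documented way to make that symbolic link in 11g (there is also a dnfs_on make target in later releases; either way, restart the instance afterwards):

```shell
# Swap the stub ODM library for the Direct NFS one
cd $ORACLE_HOME/lib
mv libodm11.so libodm11.so_stub
ln -s libnfsodm11.so libodm11.so
```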
Dorothy has more than 20 years’ experience administering single instance Oracle databases. She has developed an extensive library of shell scripts over all those years. She now wants to modify the scripts to work in an Oracle RAC environment. What views/columns would she use to determine instance IDs?
This one is trivial. The V$ views are still there, but they only reflect the instance you’re connected to. For cluster wide data use GV$, for example GV$SGA_RESIZE_OPS, and the instance ID will be in the inst_id column.
By the way, the GV$ views are also available in single instance databases. So Dorothy wouldn’t need two sets of scripts, she should be able to use the modified versions on both single instance and RAC databases.
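For example, this works unchanged on both RAC and single instance; on a single instance GV$ simply returns INST_ID = 1 for everything:

```sql
-- One row per running instance
SELECT inst_id, instance_name, status
FROM   gv$instance
ORDER  BY inst_id;
```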
Tom is a network administrator and Dick is a system administrator at Fictional Links And Nets Co. They are working with the company’s DBA Harry building an 11g R2 RAC environment. They are reasonably confused about the number of network interfaces the DBA needs for the cluster, and ask for an explanation. What should Harry tell the pair of them?
If one of the painters needs a brush that is being used by another painter, they exchange it directly, hand to hand, without putting the brush back into the toolbox and picking it up again. In RAC this is called cache fusion (toolbox fusion, if you like): if an instance needs a data block held by another instance, the block is shipped directly over the private interconnect. That is also the answer to the interface question: each node needs at least two network interfaces, one for the public network (where the virtual IPs and SCAN addresses live) and one dedicated to the private interconnect.
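On an installed cluster, a quick way to see how the interfaces are registered with the clusterware is the oifcfg utility (the interface names and subnets below are hypothetical examples):

```shell
# List the cluster's registered network interfaces
oifcfg getif
# eth0  192.168.1.0  global  public
# eth1  10.0.0.0     global  cluster_interconnect
```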
Lily and Daisy are junior DBAs at Fictional Bad Blocks Co. They have heard that their senior DBA Ash had performed an RMAN recovery in a 11g R2 RAC environment. They are very keen to learn if there are any differences to single instance recovery. If Ash knows what he’s talking about, what will he tell them?
Using RMAN isn’t much different in a RAC environment. Recovery catalogs are optional. No need to copy the backups/logs to the local disks.
Only the node that you’re running RMAN on needs access to the backup location; the other node(s) can be down for all you know.
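A minimal sketch, assuming the backups go to a shared location (such as a fast recovery area on ASM) so any surviving node can restore them:

```sql
-- From a single node: rman target /
BACKUP DATABASE PLUS ARCHIVELOG;

-- Recovery, likewise from one node, mounted but not open
RESTORE DATABASE;
RECOVER DATABASE;
```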
John is a database administrator at Fictional Just On Time Co. He needs to schedule various jobs against the company’s 11g R2 RAC database, and some of those jobs must run on the first node only to meet a legacy application’s requirements. What are the options?
This one is even more trivial. Perhaps it doesn’t need any explanation at all.
The reason for having this question was to actually show that in a RAC environment there is no guarantee which node will run the job.
If you want to make sure your job runs on a particular node, set its affinity, or connect specifically to that node. Just note that if the node is down, the job will not run.
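One way to pin a DBMS_SCHEDULER job to an instance is its INSTANCE_ID attribute; a sketch with hypothetical job and procedure names (and, as noted above, if instance 1 is down, the job simply won’t run):

```sql
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'LEGACY_JOB',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN legacy_proc; END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY');
  -- Pin the job to the first instance
  DBMS_SCHEDULER.SET_ATTRIBUTE('LEGACY_JOB', 'instance_id', 1);
END;
/
```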
Mario is a database administrator at Fictional Dash Nine Co. He needs to terminate a user’s session and issues an ALTER SYSTEM KILL SESSION ‘123,456’ command, but receives an error that the session doesn’t exist. He realizes that he is connected to the second node and the session is running on the third node of the cluster. What is the easiest way of achieving the desired outcome?
Hey, session, where a-a-a-are you-u-u-u? I am at three-e-e-e-e.
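The echo is the hint: in 11g R2 you can append the instance ID to the session identifier and kill a session on another node without reconnecting.

```sql
-- Kill session (SID 123, serial# 456) running on instance 3,
-- from whichever instance you happen to be connected to
ALTER SYSTEM KILL SESSION '123,456,@3';
```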
Connecting to the third node would also work. If you can entertain yourself by jumping from node to node, why not. All you need is to create another entry in your tnsnames.ora that pins the connection to a specific instance, adding the following line
... (INSTANCE_NAME = foobar3) ...
... (SERVICE_NAME = foobar.domain.wherever) ...
Summer is a database administrator at Fictional Patience Co. The company has several 11g R2 RAC databases, some are one-node, some are two-node, some three-node, and some four-node. Summer is going to review cluster-specific waiting events for each database. Which of the following statements are true?
When painter X needs a brush, he says: hey, give me that brush. That’s the first step. Painter Y then gives him the brush. The interaction was two-step, wasn’t it? In the Oracle RAC world, this is called a two-way block request.
Now painter Z asks painter Y. Painter Y doesn’t have the brush any more, but tells painter X to give the brush to Z. And that was a three-way request.
The painter who uses the brush most often carries the Brush Master title. If someone needs the brush, they ask the Brush Master, and if the Brush Master doesn’t have the brush at the time of the request, he asks whoever does to hand it over to the requester on his behalf. (In RAC terms, the Brush Master is the instance mastering the resource.)
Let me know if you think you’ve had enough of those painters. Come on, we are all IT professionals; “cache fusion” sounds so much better than “toolbox with paintbrushes”. Or does it?
Please note that all these questions are in no way an indication of the actual exam questions. Names and company names are fictional. Some things may or may not be obvious.