Oracle

ASM installation on Windows

1.0 Introduction

This article assumes you are using Oracle version 11g Release 2 - specifically, 11.2.0.4. That is the latest version of Oracle available for general free download at the time of writing, and if it’s not the version you are using, I suggest that you upgrade immediately. I should add that I am using and describing the 32-bit version of Oracle 11g, and have no idea whether the 64-bit version works the same way, or at all!

In this article, as well as simulating a 6GB ASM array (which will require 6GB of free disk space), I’m going to show you how to convert a pre-existing database that is stored on a traditional file system into one that uses ASM exclusively. That is an important capability of Oracle’s RMAN tool, and the instructions therefore apply as much to Linux and Unix users as to Windows ones. I will also show you how to get Enterprise Manager to administer and monitor your ASM array - and that, again, is something that applies equally to all ASM users, not just Windows ones.

For these additional parts of the article to make sense, however, I am further going to assume that you have already used the Database Configuration Assistant (DBCA) to create a database called ORCL using the ‘general’ database template supplied by Oracle. (The specific template doesn’t matter, so long as it’s a fully-functional database stored on an NTFS drive. If your database name is different, though, you will have to adjust parts of the scripts and examples I use in this article).

In short: if you are using a decent version of Windows, running 11g Release 2, already have a database, and have about 6GB of free disk space at your disposal, then please read on, because what follows is amazing, incredibly useful, powerful… and the future of Oracle database storage!

This article will show you how to simulate ASM on a desktop or laptop PC running a version of Windows. If you are installing onto a physical machine, you can safely ignore any bits where the article mentions VMware-specific issues (such as installing VMware Tools), but the general principles (such as using a static IP address) are valid in all cases.

2.0 Hard Disk Faking Methods

ASM takes units of physical storage and wraps them up into a logical volume which is then presented to databases as a single chunk of available storage. To create an ASM storage array on a laptop or desktop PC, therefore, you need to be able to present Oracle with a number of ‘chunks’ of physical storage. One way to do that would be to attach half a dozen hard disks to your laptop, of course… but that would make for an, er… interesting laptop! A slightly more practical way of doing it would be to re-partition your existing hard disk and to carve out half a dozen fresh, unformatted logical partitions. That’s certainly do-able, but it’s very intrusive and disruptive: chances are, you’d rather not mangle your existing PC quite so dramatically.

There is, of course, always the option of installing VMware and creating half a dozen virtual hard drives for a brand new virtual PC. That is the sort of approach I use a lot myself for my Laptop RAC articles, after all, so it is definitely do-able -and even more so now that VMware Server is available for free. Unfortunately, installing Windows into a virtual PC is supposed to require an additional full license from Microsoft (meaning: pay for a fresh copy), and in any case virtual PCs run slower than their physical counterparts. Slow and expensive are not ideal qualities for this sort of learning experience!

So, in this article, I am going to fake hard disks using much the same sort of technique as I used in my Laptop ASM on Linux article: create a set of ‘solid’ files and get Oracle to recognise them as usable storage, as though they were hard disks. In Linux, you make this approach work by using operating system loopback devices, and the operating system itself is thus fooled into thinking it has new hard disks at its disposal. There is no equivalent approach for Windows, however. Instead, we have to use an init.ora parameter that fools the Oracle instance into thinking a handful of files represent physical storage. This approach will certainly work, but it relies on a hidden initialisation parameter, and is therefore a complete, absolute, 100% no-no in a production setting!

It’s easier to do in practice than to describe in theory, so let’s begin!

3.0 Creating the Fake Hard Disks

As part of the standard 11g Release 2 installation, Oracle ships an executable called asmtool, and it can be used to create ‘solid’ files - solid in the sense that they are filled with zeros, as opposed to being full of empty space. These are the sorts of files ASM demands if it is to treat them as hard disks. The utility is found in the ORACLE_HOME\bin directory, which is generally made part of your PATH during a conventional Oracle installation. You should therefore be able to invoke the tool from anywhere, and hence the following commands should work:

md c:\asmdisks
asmtool -create c:\asmdisks\disk1 1000
asmtool -create c:\asmdisks\disk2 1000
asmtool -create c:\asmdisks\disk3 1000
asmtool -create c:\asmdisks\disk4 1000
asmtool -create c:\asmdisks\disk5 1000

That lot will create five files of 1000MB each (1GB apiece, near enough) and store them in a C:\ASMDISKS directory (which could have been called anything, of course).

And that’s all there is to it. You can now go about creating an ASM instance which will treat these five files as though they were five 1GB hard drives.
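Incidentally, if you’re curious about what a ‘solid’ file actually is, the idea is easy to reproduce in a few lines of script. This is only an illustrative sketch of the concept - asmtool remains the tool to use for real, and the filename and (much smaller) size here are invented for the demonstration:

```python
import os

def make_solid_file(path, size_mb):
    """Create a file of the given size, filled with zero bytes -
    'solid', as opposed to sparse or merely pre-allocated."""
    one_mb_of_zeros = b"\x00" * (1024 * 1024)
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(one_mb_of_zeros)

# A small-scale stand-in for 'asmtool -create c:\asmdisks\disk1 1000':
make_solid_file("disk_demo", 10)     # 10MB here, rather than 1000MB
print(os.path.getsize("disk_demo"))  # 10485760
```

Every byte is physically written, which is exactly why asmtool takes a noticeable while to run when it creates real 1000MB files.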

4.0 Creating the ASM Instance

Traditional Logical Volumes have traditional Logical Volume Managers. ASM has a ‘logical volume manager’, too - but it happens to be in the form of an Oracle instance, complete with Shared Pool, Large Pool and a veritable flotilla of background processes. All up, an ASM instance will be somewhere between 64MB and 100MB in size, which is not too excessive, all things considered.

Before you can create an ASM instance on 11g Release 2, though, you must first run a script which instantiates the Oracle Cluster Registry and its associated processes & services. That’s done by issuing the command:

C:\oracle\product\11.2.0\db_1\bin\localconfig add

You might wonder why you have to muck about with clustering technology on a single desktop PC, but that’s just the way things are done: even single-instance, single-node, non-clustered Oracle uses a cluster registry to record the existence of ASM instances, and regular (database) instances are hard-coded to check it to find out whether any ASM instances exist that are capable of offering and providing storage facilities. Anyway, if you don’t run that script, any attempt to start an ASM instance will fail with the error ‘Cannot contact Cluster Manager’, so run it now!

Now, with that done, it would be nice if we could use 11g Release 2’s dedicated ASM Instance creation tool… but because we’re going to have to use an unsupported hidden initialisation parameter to get the ASM instance to use our fake hard disks as though they were the real thing, we can’t. The Database Configuration Assistant’s new option to create ASM instances will not allow any non-standard parameters to be specified, which means, for us, it is crippled and of little use. Not so in a true production environment, of course, but for the purposes of this article, it’s a non-starter.

Instead, we’ll do everything at the command line. The first thing we’re going to do is to create a new init.ora for the future ASM instance:

cd \oracle\product\11.2.0\db_1\database
notepad init+ASM.ora

In the new document that last command creates, enter the following initialisation parameters and values:

INSTANCE_TYPE=ASM
DB_UNIQUE_NAME=+ASM
LARGE_POOL_SIZE=8M
ASM_DISKSTRING='c:\asmdisks\*'
_ASM_ALLOW_ONLY_RAW_DISKS=FALSE

Save the modified file, and check that it hasn’t accidentally been given a .txt extension (one of Notepad’s favourite tricks for the unwary!).

As you can see, an ASM Instance’s init.ora is pretty thin stuff! It has to be identified as an ASM instance (INSTANCE_TYPE=ASM, instead of RDBMS), because that way it won’t keep expecting to open a database. It has to be given a unique name, and it must have a large pool (which is optional for regular instances). The ASM_DISKSTRING parameter tells the instance where to find its physical storage units, and in this case is simply the directory where our solid files are stored.

All of that is standard stuff (the specific value for ASM_DISKSTRING will vary in a production environment, of course). The really unusual parameter here is the last one: it starts with an underscore, which makes it a hidden parameter and thus completely unsupported. By switching off the usual insistence that ASM use only raw disks, it allows files which merely look like raw disks to be used instead, and it is thus the magic ingredient that lets this simulation work. You would never use it in a production setting, though.
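Since Notepad’s .txt habit and pasted curly quotes are both easy to fall foul of, you could also generate the pfile from a short script. This is just a sketch - the five parameters are exactly those listed above, but the script writes to the current directory rather than the real ORACLE_HOME\database location:

```python
# Generate init+ASM.ora programmatically: no stray .txt extension,
# and the quotes around ASM_DISKSTRING are guaranteed to be plain
# straight quotes rather than pasted 'curly' ones.
params = [
    "INSTANCE_TYPE=ASM",
    "DB_UNIQUE_NAME=+ASM",
    "LARGE_POOL_SIZE=8M",
    r"ASM_DISKSTRING='c:\asmdisks\*'",
    "_ASM_ALLOW_ONLY_RAW_DISKS=FALSE",
]

# For real use, write this into ORACLE_HOME\database instead.
with open("init+ASM.ora", "w") as f:
    f.write("\n".join(params) + "\n")
```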

This being Windows, one further thing needs taking care of before our ASM instance can run: instances on Windows need a Windows Service to be created which will provide them with their memory and threads. That always has been, and still is, created by running the oradim utility like so:

oradim -NEW -ASMSID +ASM -STARTMODE auto

The only change between this and the standard ORADIM syntax used to create ‘regular’ instances is that it uses a -ASMSID switch instead of a -SID one. Otherwise, it works identically, and should mean that your ASM instance is now running.

You can test whether that is indeed the case by trying to connect to it, of course:

C:\>set ORACLE_SID=+ASM
C:\>sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 - Production on Tue December 30 07:45:13 2018

Copyright (c) 1982, 2013, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> show user
USER is "SYS"

Notice that I have to set the right ORACLE_SID first, but otherwise you connect to ASM instances exactly as you would to a normal instance - though you have to do it as SYS, because ASM instances can’t consult a data dictionary to check up on ordinary users and their passwords. Anyway: we know this instance is running because otherwise, we’d have gotten a message that said, ‘Connected to an idle instance’. But if you want incontrovertible proof:

SQL> select status from v$instance;

STATUS
------------
STARTED

Which goes to prove, incidentally, that a lot of the V$ views you are used to with standard instances work perfectly well in ASM instances, too.

5.0 Creating the ASM Diskgroup

Once we have an ASM instance running, we can check whether it knows about our fake hard disks, because if it doesn’t know about them now, it won’t be able to turn them into usable storage later:

set pagesize 3000
set linesize 132
column path format a55
select path, mount_status from v$asm_disk;

PATH                                                    MOUNT_S
------------------------------------------------------- -------
C:\ASMDISKS\DISK1                                       CLOSED
C:\ASMDISKS\DISK2                                       CLOSED
C:\ASMDISKS\DISK5                                       CLOSED
C:\ASMDISKS\DISK4                                       CLOSED
C:\ASMDISKS\DISK3                                       CLOSED

(Tip: if this query initially returns no rows, check that your init+ASM.ora doesn’t have ‘curly quotes’ around the ASM_DISKSTRING value: they need to be plain old ‘straight quotes’, but cutting and pasting from various browsers on Windows can change that).
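If you’d rather not squint at Notepad to check, a few lines of script can detect and repair the offending characters. A sketch only - the ‘pasted’ sample line here is invented for the demonstration:

```python
# Replace 'curly' quotes - which pasting into Windows editors can
# silently introduce - with the plain straight quotes that the
# ASM_DISKSTRING value actually requires.
CURLY_TO_STRAIGHT = {
    "\u2018": "'",   # left single quote
    "\u2019": "'",   # right single quote
    "\u201c": '"',   # left double quote
    "\u201d": '"',   # right double quote
}

def fix_quotes(text):
    for curly, straight in CURLY_TO_STRAIGHT.items():
        text = text.replace(curly, straight)
    return text

# A pfile line as a careless paste might leave it:
pasted = "ASM_DISKSTRING=\u2018c:\\asmdisks\\*\u2019"
print(fix_quotes(pasted))   # ASM_DISKSTRING='c:\asmdisks\*'
```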

You’ll notice all five ‘disks’ have been detected according to this special ASM V$ view, but are currently ‘closed’ - because I haven’t turned them into a disk group yet. That requires that I issue this command:

create diskgroup DGROUP1 normal redundancy
disk 'c:\asmdisks\disk1',
     'c:\asmdisks\disk2',
     'c:\asmdisks\disk3',
     'c:\asmdisks\disk4',
     'c:\asmdisks\disk5';

Once that command completes, the earlier query now returns this result:

select path, mount_status from v$asm_disk;

PATH                                                    MOUNT_S
------------------------------------------------------- -------
C:\ASMDISKS\DISK1                                       CACHED
C:\ASMDISKS\DISK2                                       CACHED
C:\ASMDISKS\DISK3                                       CACHED
C:\ASMDISKS\DISK4                                       CACHED
C:\ASMDISKS\DISK5                                       CACHED

This time, a status of ‘CACHED’ means the disks have been ‘wrapped’ into a single diskgroup of 5GB raw capacity, and are available for databases to make use of.
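It’s worth being clear what normal redundancy costs you, though: every extent is mirrored, so of the 5000MB of raw space in DGROUP1, only about half is usable for actual data. A back-of-the-envelope sketch (it ignores ASM’s small per-disk metadata overhead, so real figures come out slightly lower):

```python
# Rough capacity arithmetic for the DGROUP1 diskgroup created above.
# Assumption: normal redundancy means two-way mirroring of every
# extent; ASM's small per-disk metadata overhead is ignored.
disks_mb = [1000] * 5                # five 1000MB fake disks

raw_mb = sum(disks_mb)               # total raw capacity
mirror_copies = 2                    # normal redundancy: two copies
usable_mb = raw_mb // mirror_copies  # space left for actual data

print(f"raw: {raw_mb}MB, usable (approx): {usable_mb}MB")
# raw: 5000MB, usable (approx): 2500MB
```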

All that remains is to make sure that the ASM instance automatically “mounts” this new diskgroup every time it starts up. That is done by adding a new parameter, ASM_DISKGROUPS, to the initialisation file, and you can achieve that most simply by cutting-and-pasting these commands:

create spfile from pfile;
startup force;
alter system set asm_diskgroups=DGROUP1 scope=spfile;
startup force;

After the second of those forced startups, you should see this:

ASM instance started

Total System Global Area 79691776 bytes
Fixed Size 1247396 bytes
Variable Size 53278556 bytes
ASM Cache 25165824 bytes
ASM diskgroups mounted

…with the last line there indicating success in auto-mounting the disk group.

6.0 Migrating a Database to ASM

There is no requirement for a database to use ASM storage exclusively: it’s perfectly legitimate for a database to have some of its datafiles on ‘conventional’ storage and some inside an ASM array, in other words. But, practically speaking, you won’t want to run in such a transitional, half-and-half state for long. ASM is supposed to make storage administration easier, not complicate things! Ultimately, therefore, you will probably want to convert a database that is using conventional storage into one that uses purely ASM - and there’s only one tool which can do that (because it’s the one tool that knows how to read from and write to both conventional file systems and ASM): Oracle’s Recovery Manager (RMAN).

The basic principle of conversion is easy enough: you get RMAN to back up your existing database ‘into’ the ASM array, and then you tell RMAN to make the just-performed backup the actual production database with the switch database to copy command. This principle applies equally well if you want to move a database from one ASM storage array to another, or indeed from an ASM array back to a conventional file system… it’s always the same idea: back up to your desired destination, whatever that may be, and then “switch” to the backup.

There are just two wrinkles in this inspirationally simple idea: RMAN never backs up temporary tablespaces or online redo logs, so both require a bit of post-conversion manual tinkering to get right!

Apart from that, though, it’s really very simple - but it’s best if I work my way through the process to prove the point, rather than just claiming it to be so!

6.1 Preparing the Database

Before you begin backing up the database, you need to set a few of its initialisation parameters to new values, as follows:

alter system set db_create_file_dest='+DGROUP1' scope=spfile;
alter system set db_recovery_file_dest='+DGROUP1' scope=spfile;
alter system set db_recovery_file_dest_size=3000M scope=spfile;
alter system reset control_files scope=spfile sid='*';

The first two commands are needed so that RMAN knows by default where to write its backups and where to create new online redo logs and datafiles, and the third sizes the recovery area those backups will be written to. The fourth command is required so that future control files aren’t created in their current location: once CONTROL_FILES is reset, the DB_CREATE_FILE_DEST parameter becomes the location where control files are created - and, in my case, that means ‘inside the ASM diskgroup’, which is what we’re after, of course. (I’m ignoring the possible use of DB_CREATE_ONLINE_LOG_DEST_n, which takes precedence over DB_CREATE_FILE_DEST, if set).

Remember that these are settings that are applied to the real, conventional instance -so don’t forget to set ORACLE_SID=orcl before trying to connect with SQL*Plus (assuming your database name is ‘orcl’, of course!)

Once those four commands have been issued, the next two are simple enough:

shutdown immediate
startup nomount

We want a nice, clean shutdown so that we don’t have to perform recovery on an inconsistent backup of a badly shut-down database, and we need an instance for RMAN to connect to - but we can’t go any further than that, because we’ve reset the details of where to find the database’s control files.

6.2 Use RMAN to Bulk Migrate

You’re now ready to fire up RMAN and backup the database “into” the ASM array. The commands are really pretty standard RMAN stuff:

C:\>rman target /

restore controlfile from 'C:\oracle\product\11.2.0\oradata\orcl\control01.ctl';

alter database mount;

backup as copy database format '+DGROUP1';

switch database to copy;

alter database open;

exit;

The first command causes RMAN to copy the specified source controlfile into the +DGROUP1 disk group. It will actually copy it twice, because both the DB_CREATE_FILE_DEST and DB_RECOVERY_FILE_DEST parameters have been set. Once the copies are safely inside the ASM array, the next command mounts the database using them: the operative controlfiles are now ASM-based, not file-system ones.

The backup as copy database command then takes an image copy of each of the database’s datafiles in turn. The copies are made into the ASM diskgroup, because that’s what the FORMAT clause says to do.

The real magic then happens with the next command: ‘switch database to copy’ means ‘that backup you just took should now be regarded as the operative database, not just a copy’… and, of course, that means all my datafiles are now inside the ASM array. I can therefore open the database, and quit RMAN.

6.3 Sorting out the Extra Bits

RMAN gets the bulk of the database conversion right without too much pain, I think you’ll agree. But as I mentioned earlier, it doesn’t know how to cope with temporary tablespaces or online redo logs, as these queries executed on the newly-opened database nicely illustrate:

SQL> select name from v$controlfile; 

NAME
-----------------------------------------------------
+DGROUP1/orcl/controlfile/backup.256.591786977
+DGROUP1/orcl/controlfile/backup.257.591786979

SQL> select name from v$datafile;

NAME
-----------------------------------------------------
+DGROUP1/orcl/datafile/system.258.591787015
+DGROUP1/orcl/datafile/undotbs1.261.591787141
+DGROUP1/orcl/datafile/sysaux.259.591787091
+DGROUP1/orcl/datafile/users.262.591787149
+DGROUP1/orcl/datafile/example.260.591787125

SQL> select member from v$logfile;

MEMBER
-----------------------------------------------------
C:\ORACLE\PRODUCT\11.2.0\ORADATA\ORCL\REDO03.LOG
C:\ORACLE\PRODUCT\11.2.0\ORADATA\ORCL\REDO02.LOG
C:\ORACLE\PRODUCT\11.2.0\ORADATA\ORCL\REDO01.LOG

SQL> select name from v$tempfile;

NAME
-----------------------------------------------------
C:\ORACLE\PRODUCT\11.2.0\ORADATA\ORCL\TEMP01.DBF

So, we see the scale of the problem.

Well, the easiest issue to tackle first is the temporary tablespace one:

alter tablespace TEMP add tempfile;

alter database tempfile 'C:\ORACLE\PRODUCT\11.2.0\ORADATA\ORCL\TEMP01.DBF' drop;

It might surprise you that you can drop temporary files from a temporary tablespace, but you’ve been able to do that since version 8i, so it’s not a new feature! In any case, the effect of those two commands is seen easily enough:

SQL> select name from v$tempfile;

NAME
-----------------------------------------
+DGROUP1/orcl/tempfile/temp.265.591787645

Which leaves just the online redo logs to sort out. There’s no easy way of migrating them: the best you can do is to create new groups inside the ASM array and drop the old ones that are outside of it:

alter database add logfile size 50m;
alter database add logfile size 50m;
alter database add logfile size 50m;

SQL> select group#, status from v$log order by group#;

    GROUP# STATUS
---------- ----------------
         1 INACTIVE
         2 CURRENT
         3 INACTIVE
         4 UNUSED
         5 UNUSED
         6 UNUSED

You can see the original three logs, and the new three. I can’t drop my original group 2, though, because it’s marked as CURRENT. But I can at least start by dropping groups 1 and 3:

alter database drop logfile group 1;
alter database drop logfile group 3;

In theory, a log switch or two (issued with alter system switch logfile;) should then cause Group 2 to become inactive:

SQL> select group#, status from v$log order by group#;

    GROUP# STATUS
---------- ----------------
         2 ACTIVE
         4 CURRENT
         5 UNUSED
         6 UNUSED

One switch won’t do it, because a log becomes ACTIVE before it becomes INACTIVE, and only INACTIVE logs can be dropped. Maybe one more switch?

SQL> select group#, status from v$log order by group#;

    GROUP# STATUS
---------- ----------------
         2 ACTIVE
         4 ACTIVE
         5 CURRENT
         6 UNUSED

But no: now two logs are active, and so it goes on. In fact, for this database, which is not in archivelog mode, I find that this is what’s required:

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup
ORACLE instance started.

Total System Global Area 167772160 bytes
Fixed Size 1247900 bytes
Variable Size 79693156 bytes
Database Buffers 83886080 bytes
Redo Buffers 2945024 bytes
Database mounted.
Database opened.

SQL> select group#, status from v$log order by group#;

    GROUP# STATUS
---------- ----------------
         2 INACTIVE
         4 INACTIVE
         5 CURRENT
         6 UNUSED

SQL> alter database drop logfile group 2;

Database altered.

SQL> select member from v$logfile;

MEMBER
---------------------------------------------------------
+DGROUP1/orcl/onlinelog/group_4.266.591787973
+DGROUP1/orcl/onlinelog/group_4.267.591787993
+DGROUP1/orcl/onlinelog/group_5.268.591788059
+DGROUP1/orcl/onlinelog/group_5.269.591788069
+DGROUP1/orcl/onlinelog/group_6.270.591788083
+DGROUP1/orcl/onlinelog/group_6.271.591788093

6 rows selected.

It’s a bit long-winded, but an instance bounce clears the lingering ‘ACTIVE’ status, allowing me finally to drop the last of the original log files. As the final query shows, I now have three groups, each multiplexed into two locations (because I set both DB_CREATE_FILE_DEST and DB_RECOVERY_FILE_DEST earlier), safely inside my ASM array.
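The lifecycle that made this such a palaver can be modelled as a toy state machine: a log switch turns the CURRENT group ACTIVE (its changes may still be needed for crash recovery) and promotes an unused group to CURRENT, and only a completed checkpoint - which a clean shutdown and restart forces - retires ACTIVE groups to INACTIVE. This is purely an illustrative sketch of that rule, not anything Oracle actually exposes:

```python
# Toy model of online redo log states, starting where the article
# does after groups 1 and 3 are dropped: a switch turns the CURRENT
# group ACTIVE and makes the lowest UNUSED group CURRENT; a completed
# checkpoint (here, the clean shutdown/startup) retires ACTIVE groups.
logs = {2: "CURRENT", 4: "UNUSED", 5: "UNUSED", 6: "UNUSED"}

def switch_logfile(logs):
    current = next(g for g, s in logs.items() if s == "CURRENT")
    logs[current] = "ACTIVE"          # not droppable yet!
    unused = min(g for g, s in logs.items() if s == "UNUSED")
    logs[unused] = "CURRENT"

def checkpoint(logs):
    for g, s in logs.items():
        if s == "ACTIVE":
            logs[g] = "INACTIVE"      # now safe to drop

switch_logfile(logs)   # 2:ACTIVE, 4:CURRENT - can't drop 2 yet
switch_logfile(logs)   # 2:ACTIVE, 4:ACTIVE, 5:CURRENT - still stuck
checkpoint(logs)       # the bounce: 2 and 4 finally go INACTIVE
print(logs)
# {2: 'INACTIVE', 4: 'INACTIVE', 5: 'CURRENT', 6: 'UNUSED'}
```

Each switch just grows the pool of ACTIVE groups, which is exactly the chase the transcripts above show; only the checkpoint breaks the cycle.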
