- simple idea
- 1-to-1 ASM instance to server
- shared disk groups
- wide file striping
- 10 years old in oracle database
grid computing .. cloud computing
- more consolidation → more database instances per node
- more clustering → more nodes in clusters
- larger storage configurations
ASM evolves
- maximize ASM robustness because of the increased number of database instances running on large servers
- minimize ASM overhead because of large clusters
- minimize cluster reconfiguration overhead because of large clusters
engineered systems (EXADATA and database appliance)
- complete solutions
- optimized for a particular objective
- storage management provided by ASM
- storage cells provide management offloading
flex ASM (NEW)
- eliminates the requirement for an ASM instance on every cluster server
- database instances connect to any ASM instance in the cluster
- database instances can fail over to a secondary ASM instance
- admins specify the cardinality of ASM instances (default 3); see the srvctl sketch below
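a minimal sketch of checking and changing flex ASM cardinality with srvctl (the count value is just an example; exact output and option spelling can vary slightly between 12c releases):

    srvctl status asm            # shows which nodes currently run an ASM instance
    srvctl config asm            # shows the configured ASM instance count and password file location
    srvctl modify asm -count 3   # ask clusterware to maintain three ASM instances in the cluster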
ASM network
in the previous version an oracle cluster required:
- public network for client applications
- one or more private networks for interconnect communication within the cluster
remote access (NEW)
in previous versions database instances used OS authentication to connect to ASM
ASM clients and the ASM server were always on the same machine, so this had to change
in 12c the database instance and the ASM server can be on different servers
- flex ASM uses password file authentication
- the password file is stored in an ASM disk group
- the default configuration is set up with the ASM installation (see the sketch below)
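a rough sketch of locating (and, if ever needed, recreating) the shared ASM password file; +DATA is a placeholder disk group name and the installer normally creates this file for you:

    srvctl config asm                      # reports the ASM password file location, e.g. +DATA/orapwASM
    asmcmd pwget --asm                     # same information, queried through asmcmd
    orapwd file='+DATA/orapwASM' asm=y     # create an ASM password file inside a disk group by hand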
other flex features
- increased max number of disk groups to 511 (previously 63)
- command for renaming an ASM disk (SQL sketch below)
- ASM instance patch-level verification
- replicated physical metadata improves reliability
- virtual metadata has always been replicated with ASM mirroring
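a hedged SQL sketch of the new disk rename command; the disk group and disk names are made up, and the disk group generally has to be mounted in restricted mode for the rename to be allowed:

    ALTER DISKGROUP data DISMOUNT;
    ALTER DISKGROUP data MOUNT RESTRICTED;
    ALTER DISKGROUP data RENAME DISK 'DATA_0003' TO 'DATA_NEW_0003';
    ALTER DISKGROUP data DISMOUNT;
    ALTER DISKGROUP data MOUNT;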
admins can now specify a failure group repair time (NEW)
- similar to the existing disk repair time
- new disk group attribute: failgroup_repair_time (SQL sketch below)
- default is 24 hours
- a power limit can be set for disk resync operations
- conceptually similar to the power limit setting for disk group rebalance
- rebalance power: 1 (least resources) to 1024 (most resources)
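a minimal SQL sketch of both settings; the disk group and failure group names are placeholders:

    ALTER DISKGROUP data SET ATTRIBUTE 'failgroup_repair_time' = '24h';   -- repair window for a whole failure group
    ALTER DISKGROUP data ONLINE DISKS IN FAILGROUP fg1 POWER 8;           -- resync with an explicit power limit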
disk resync now checkpoints (NEW)
interrupted resync operations are automatically restarted
sometimes rebalance operations are required to restore redundancy.
Example:
a disk fails and no replacement is available
an HBA containing a failure group goes offline
- with oracle database 12c, flex ASM performs optimized reorganization
- critical files such as control files and log files are restored before datafiles
- a secondary failure is less likely to result in critical file loss
- admins can now specify the content type for each disk group (SQL sketch below)
- new disk group attribute: content.type (possible values: data, recovery, or system)
- disk group primary/secondary partnering changes with content.type
- decreases the likelihood that multiple failures cause data loss
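a short SQL sketch of tagging disk groups by content type; the disk group names are placeholders, and a rebalance is needed for a changed content.type to take full effect:

    ALTER DISKGROUP data SET ATTRIBUTE 'content.type' = 'data';
    ALTER DISKGROUP reco SET ATTRIBUTE 'content.type' = 'recovery';
    ALTER DISKGROUP reco REBALANCE POWER 4;    -- apply the new partnering layout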
silent data corruption is a fact in today's storage world
- the database checks for logical consistency when reading data
- if a logical corruption is detected, automatic recovery can be performed using ASM mirror copies
- for seldom accessed data, over time all mirror copies of the data could become corrupted
- with oracle db 12c data can be proactively scrubbed (NEW; SQL sketch below)
- scrubbing occurs automatically during rebalance operations
- with flex ASM, most rebalance tasks are offloaded to EXADATA storage
- each offload request can replace numerous IOs
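a hedged SQL sketch of triggering a scrub by hand; the disk group and file names are placeholders, and without the REPAIR keyword the scrub only reports corruptions instead of fixing them:

    ALTER DISKGROUP data SCRUB POWER LOW;
    ALTER DISKGROUP data SCRUB FILE '+DATA/ORCL/DATAFILE/example.271.123456789' REPAIR POWER HIGH;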
IO distribution in 12c (NEW)
- each read request is sent to the least loaded available disk
- even read is transparent to apps and enabled by default
- users on IO-bound systems should notice a performance improvement
managing flex ASM (nothing changes)
- srvctl status asm
- srvctl modify asm -count 4
- srvctl start asm -n node_name
- srvctl stop asm -n node_name
ASM trivia
why does the power limit setting go to 11? Why 11?
what was it called before ASM?
all ASM file extent pointers are protected from corruption by XORing with a constant.. what is that constant? 42
The acronym describing ASM data allocation and protection policy is LIKE what?