Thursday, September 26, 2013

OOW Notes - Database Cloning and Direct NFS

Wednesday 11:45 - Cloning Oracle Database


snapshot PDB – pluggable database cloning


underlying filesystem must support storage snapshots
  • ASM and ASM Cluster File System (ACFS)
  • NFS accessed with Direct NFS
  • integrated with ZFS, NetApp, EMC

user must have privileges to create and destroy snapshots
snapshots are automatically mounted on the database node
snapshots are automatically deleted when the clone PDB is dropped
storage credentials are saved securely in the Transparent Data Encryption (TDE) keystore
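
A minimal SQL sketch of the flow described above, assuming a source PDB named pdb1 sitting on snapshot-capable storage (all names are placeholders):

  -- source has to be read only while the clone is taken (see cons below)
  ALTER PLUGGABLE DATABASE pdb1 CLOSE IMMEDIATE;
  ALTER PLUGGABLE DATABASE pdb1 OPEN READ ONLY;

  -- thin clone via a storage snapshot instead of a full file copy
  CREATE PLUGGABLE DATABASE pdb1_snap FROM pdb1 SNAPSHOT COPY;

  -- dropping the clone also deletes the underlying snapshot
  DROP PLUGGABLE DATABASE pdb1_snap INCLUDING DATAFILES;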


Cons:
  • source PDB cannot be a remote PDB
  • source PDB needs to be read only while cloning
  • source PDB cannot be unplugged/dropped while clones exist


dba_pdb_history can be used to monitor PDB clones (the CLONETAG column indicates whether a PDB was cloned using snapshot copy)
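
For example (a sketch, assuming the CLONETAG column mentioned above; run from the root container):

  SELECT pdb_name, operation, op_timestamp, clonetag
  FROM   dba_pdb_history
  WHERE  clonetag IS NOT NULL;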


PDB snapshot cloning on ZFS Storage Appliance


Step-by-step guide on creating thin provisioned clones
describes how storage credentials are saved in the database keystore


pluggable database cloning


full copy creation time is proportional to PDB size
  • over 9 hours to clone a 1.3 TB PDB
  • snapshot copy clones a 1.3 TB PDB in under 6 minutes




Thin provisioned PDBs result in huge space savings
over 99% space savings compared to the source PDB size


Direct NFS Clone DB


  • first introduced in 11.2.0.2
  • clone production databases with minimal impact
  • prod data stays safe and secure
  • uses a simple RMAN backup
  • refresh test instances with RMAN incremental backups
  • based on copy-on-write technology
  • huge storage space savings with thin provisioning
  • works with single instance and RAC databases


  • easy to set up
  • works on all platforms
  • instantaneous cloning
  • no copies of the data
  • create multiple clones from a single backup
  • integrated with ZFSSA, NetApp, EMC snapshots
  • v$clonedfile provides info on cloned files
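
A rough sketch of the flow from the MOS note below, assuming an RMAN image copy of prod under /backup and a dNFS-mounted volume at /clone (paths are made up); the clone instance runs with CLONEDB=TRUE and a re-created control file pointing at the backup copies:

  -- map each backup image copy to a sparse file on the dNFS volume
  BEGIN
    DBMS_DNFS.CLONEDB_RENAMEFILE('/backup/users01.dbf',
                                 '/clone/users01.dbf');
  END;
  /

  ALTER DATABASE OPEN RESETLOGS;

  -- unchanged blocks are read from the backup, changed blocks from
  -- the thin clone files; v$clonedfile shows the mapping
  SELECT * FROM v$clonedfile;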


references
MOS note 1210656.1


Clone DB use cases


  • HW/SW upgrades
  • app/OS patching
  • backup verification
  • app development and testing
  • recover Oracle objects
  • run read-only report queries


roughly 10% performance degradation on test systems


Direct NFS client


  • first introduced in 11gR1
  • supports NFSv3, NFSv4, and NFSv4.1 (except parallel NFS)
  • massively parallel I/O architecture (each Oracle process creates its own connection to the NFS server)
  • simplifies client management and eliminates configuration errors
  • consistent configuration and performance across different platforms (even on Microsoft Windows)
  • significant cost savings for database storage (Direct NFS is a free Oracle option)
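
For reference, turning the dNFS client on is just a relink of the Oracle binary (a sketch for 11.2+; run with the instance down), and v$dnfs_servers confirms it is actually in use:

  $ cd $ORACLE_HOME/rdbms/lib
  $ make -f ins_rdbms.mk dnfs_on    # dnfs_off turns it back off

  -- after restart, rows in v$dnfs_servers mean dNFS is serving files
  SQL> SELECT svrname, dirname FROM v$dnfs_servers;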




improved NAS storage performance
  • optimized NFS client for database workloads
  • support for direct I/O and asynchronous I/O


optimized scalability of NAS storage
  • supports up to 4 parallel network paths to storage


improved high availability of NAS storage
  • automatic load balancing
  • automatic failover
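
A hedged oranfstab sketch with two paths to one filer (server name, addresses, and mounts are made up); dNFS spreads I/O across the listed paths and fails over if one goes down:

  server: mynas
  local: 192.168.1.1
  path: 192.168.1.10
  local: 192.168.2.1
  path: 192.168.2.10
  export: /export/oradata mount: /u02/oradata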


MOS Notes
  • 1496040.1
  • 1495104.1
  • 1495709.1


Direct NFS 12c enhancements


NFSv4 and NFSv4.1 support is new in 12c


unified protocol for MOUNT, Port Mapper, NFS and NLM
  • simplifies client code and configuration


compound RPCs
  • reduces latency by cutting round trips to the NFS server


session management
  • adds flow control to the NFS protocol
  • creates a bounded reply cache


configuration
  • nfs_version parameter in oranfstab
  • supported values: nfsv3, nfsv4, nfsv4.1
  • default: nfsv3
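
e.g., to request NFSv4.1 for a mount in oranfstab (same made-up names as the sketch above):

  server: mynas
  path: 192.168.1.10
  export: /export/oradata mount: /u02/oradata
  nfs_version: nfsv4.1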


Oracle Intelligent Storage Protocol (OISP)




Direct NFS use cases


  • RMAN backups
    • non-OISP (without tuning): 420 Mbps
    • OISP: 720 Mbps


Customer Case – Yahoo


data marts running 11.2.0.3 RAC on NFS storage
3 x 1 GigE active-passive bonded interfaces
running into a network throughput bottleneck with NFS
enabling dNFS pushed throughput over 320 Mbps (around 100 Mbps without it)


84% improvement compared to kNFS
19% improvement compared to dNFSv1


Customer Case - Thomson Reuters


  • 65 TB data warehouse on Exadata
  • Risk & Fraud people and company data marts on this DWH
  • one 32 TB data mart on an individual box (details below)
  • 6 x 10 Gbit connections to a private switch, then to the storage box (all flash)
  • 4 controllers with 3 direct path channels each (12 channels in total)
  • HP DL980 (80 cores and 2 TB memory)
  • about 2 Gbps sustained over a 4-hour period
  • within this period, a total of 10 TB of I/O read and written
  • peaks as high as 5 Gbps
