31 Oct 2012

Cool Subnetting Tricks with VLSM

A few months back, I showed you how to organize your network into smaller subnets. That post covered the concept of subnetting in detail, so if you missed it, I suggest taking a look to make sure you can follow VLSM and this article in their entirety. For now, I will assume that you are already familiar with subnetting and know how to divide a network into smaller subnets.
In today’s article, we’ll subnet an already subnetted network into multiple subnets with variable subnet masks and then allocate them within our sample network.
Variable Length Subnet Mask (VLSM) is a key technology in large, scalable networks. Mastering the concept of VLSM is not an easy task, but it's well worth it. The importance of VLSM and its beneficial contribution to network design is unquestionable. By the end of this article you will be able to understand the benefits of VLSM and describe the process of calculating VLSMs. I will use a real-world example to help you understand the whole process and its beneficial effects.

Benefits of VLSM

VLSM provides the ability to subnet an already subnetted network address. The benefits that arise from this behavior include:
Efficient use of IP addresses: IP addresses are allocated according to the host space requirement of each subnet.
IP addresses are not wasted. For example, a Class C network of 192.168.10.0 with a mask of 255.255.255.224 (/27) gives you eight subnets, each with 32 IP addresses (30 of which can be assigned to devices). But what if we had a few WAN links in our network? A WAN link needs only one IP address on each side, so only two IP addresses per link are required.
Without VLSM, assigning anything smaller than a whole /27 to each link would be impossible. With VLSM, we can subnet one of the subnets, 192.168.10.32, into smaller subnets with a mask of 255.255.255.252 (/30). This way we end up with eight subnets of only two usable hosts each, which are exactly what we need for the WAN links (see the short PowerShell sketch at the end of this section).
The /30 subnets created are: 192.168.10.32/30, 192.168.10.36/30, 192.168.10.40/30, 192.168.10.44/30, 192.168.10.48/30, 192.168.10.52/30, 192.168.10.56/30 and 192.168.10.60/30.
Support for better route summarization: VLSM supports hierarchical addressing design; therefore, it can effectively support route aggregation, also called route summarization.
The latter can successfully reduce the number of routes in a routing table by representing a range of network subnets in a single summary address. For example, subnets 192.168.10.0/24, 192.168.11.0/24 and 192.168.12.0/24 could all be summarized into 192.168.8.0/21.
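To make the arithmetic above concrete, here is a small PowerShell sketch of my own (not part of the original design) that lists the /30 blocks carved out of 192.168.10.32/27. It works because everything stays inside the last octet:

# The /27 starts at .32 and spans 32 addresses; each /30 is a block of 4.
$prefix = "192.168.10"
for ($o = 32; $o -lt 64; $o += 4) {
    "{0}.{1}/30  hosts .{2}-.{3}  broadcast .{4}" -f $prefix, $o, ($o + 1), ($o + 2), ($o + 3)
}

And as a quick sanity check on the summarization example, this hypothetical snippet tests which /24 networks fall inside the 192.168.8.0/21 summary. A /21 keeps the top five bits of the third octet, which is the same as masking that octet with 248:

# 192.168.8.0 through 192.168.15.0 are covered by the /21; 192.168.16.0 is not.
foreach ($net in "192.168.10.0", "192.168.11.0", "192.168.12.0", "192.168.16.0") {
    $octet3 = [int]$net.Split(".")[2]
    "{0}/24 covered by 192.168.8.0/21: {1}" -f $net, (($octet3 -band 248) -eq 8)
}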

Address Waste Without VLSM

The following diagram shows a sample internetwork which uses the Class C network address 192.168.10.0 (/24), subnetted into 8 equal-size subnets (32 IP addresses each) to be allocated to the various portions of the network.
This specific network contains 3 WAN links, each of which is allocated one subnet from the pool of available subnets. Obviously, 30 of the 32 addresses in each of those subnets (28 of them usable host addresses) are wasted, since they are never going to be used on the WAN links.
Variable Length Subnet Mask - 1

Implementing VLSM

In order to be able to implement VLSMs in a quick and efficient way, you need to understand and memorize the IP address blocks and available hosts for various subnet masks.
Create a small table with all of this information and use it to create your VLSM network. The following table shows the block sizes used for subnetting a Class C subnet.
Variable Length Subnet Mask - 2
Having this table in front of you is very helpful. For example, if you have a subnet with 28 hosts then you can easily see from the table that you will need a block size of 32. For a subnet of 40 hosts you will need a block size of 64.
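If you prefer to calculate the block size rather than memorize the table, a rough helper like the following will do the job (this is my own sketch, not part of the original material, and Get-BlockSize is a made-up name):

function Get-BlockSize ([int]$Hosts) {
    # Smallest power-of-two block whose usable host count (block - 2) covers the requirement.
    $block = 4
    while (($block - 2) -lt $Hosts) { $block *= 2 }
    New-Object PSObject -Property @{ Hosts = $Hosts; BlockSize = $block; Prefix = (32 - [math]::Log($block, 2)) }
}
Get-BlockSize 28   # block size 32 -> /27
Get-BlockSize 40   # block size 64 -> /26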

Example: Create a VLSM Network

Let us use the sample network provided above to implement VLSM. According to the number of hosts in each subnet, identify the addressing blocks required. You should end up with the following VLSM table for this Class C network 192.168.10.0/24.
Variable Length Subnet Mask - 3
Take a deep breath … we’re almost done. We have identified the necessary block sizes for our sample network.
The final step is to allocate the actual subnets to our design and construct our VLSM network. We will take into account that subnet zero can be used in our network design; therefore, the following solution allows us to avoid unnecessary addressing waste:
Variable Length Subnet Mask - 4
With VLSM we have used only 140 addresses, so nearly half of the address space of the Class C network is saved. The address space that remains unused is available for any future expansion.
Isn't that amazing? We have reserved a great amount of addresses for future use. Our sample network is finalized as shown in the following diagram:
Variable Length Subnet Mask - 5

Final Thoughts

Variable Length Subnet Mask is an extremely important chapter in network design. Honestly, if you want to build scalable and efficient networks, you should definitely learn how to design and implement VLSM.
It's not that difficult once you understand block sizes and the way to allocate them within your design. Don't forget that VLSM relates directly to the subnetting process, so mastering subnetting is a prerequisite for effectively implementing VLSM. Feel free to go through my subnetting articles a couple of times to get the hang of the whole process.

By
http://www.trainsignal.com/blog/cisco-ccna-vlsm

How to Organize Your Network Into Smaller Subnets

In my last article, IP Addressing and Routing Part 1: The Invasion of IP Addresses, I presented the architecture of the IP addressing scheme. We went over the IP Network Classes and how to distinguish between them.
If you’re new to this field, I would suggest adding both Part 1 and Part 2: IP Routing Process to your reading list, since they provide some additional information that can be useful in getting a firm grasp of the subnetting concept.
In today’s article we are going to learn about the concept of subnetting and how we can use it to divide our classful network into smaller networks that can operate in separate working zones. We’ll also take a look at how we can conserve address space and save resources on process management with the use of subnetting.
I’ll use a few examples to clearly present the steps of subnetting and help you master this topic. And although at first this may seem difficult, don’t give up! All it takes is some time and practice!

What Is Subnetting?

Subnetting is the process of stealing bits from the HOST part of an IP address in order to divide the larger network into smaller sub-networks called subnets. After subnetting, we end up with NETWORK SUBNET HOST fields. We always reserve an IP address to identify the subnet and another one to identify the broadcast address within the subnet. In the following sections you will find out how all this is possible.

Why Use Subnetting?

Conservation of IP addresses: Imagine having a network of 20 hosts. Using a Class C network will waste a lot of IP addresses (254-20=234). Breaking up large networks into smaller parts would be more efficient and would conserve a great amount of addresses.
Reduced network traffic: The smaller the networks created, the smaller the broadcast domains formed, and hence the less broadcast traffic crossing network boundaries.
Simplification: Breaking large networks into smaller ones can simplify fault troubleshooting by isolating network problems to a specific subnet.

The Subnetting Concept

You will be surprised how easy the concept of Subnetting really is. Imagine a network with a total of 256 addresses (a Class C network). One of these addresses is used to identify the network address and another one is used to identify the broadcast address on the network. Therefore, we are left with 254 addresses available for addressing hosts.
If we take all these addresses and divide them equally into 8 different subnets we still keep the total number of original addresses, but we have now split them into 8 subnets with 32 addresses in each. Each new subnet needs to dedicate 2 addresses for the subnet and broadcast address within the subnet.
The result is that we eventually come up with 8 subnets, each one possessing 30 addresses available for hosts. You can see that the total amount of addressable hosts is reduced (240 instead of 254) but better management of addressing space is gained. I’ll now use a couple of examples to help explain the process of subnetting as clearly as possible.

Subnetting a Class C Address Using the Binary Method

We will use a Class C address which takes 5 bits from the Host field for subnetting and leaves 3 bits for defining hosts as shown in figure 1 below. Having 5 bits available for defining subnets means that we can have up to 32 (2^5) different subnets.
It should be noted that in the past, using the all-zeros subnet (00000) and the all-ones subnet (11111) was not allowed. This is no longer the case: since Cisco IOS Software Release 12.0, the entire address space, including all possible subnets, is explicitly allowed.
Cisco Subnetting 1
Let’s use IP address 192.168.10.44 with subnet mask 255.255.255.248 or /29.

STEP 1: Convert to Binary

Cisco Subnetting 2

STEP 2: Calculate the Subnet Address

To calculate the subnet address, you need to perform a bitwise AND operation (1 AND 1 = 1; 1 AND 0 = 0; 0 AND 0 = 0) on the host IP address and the subnet mask. The result is the subnet address in which the host is situated.
Cisco Subnetting 3
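If you would rather let the shell do the AND operation for you, here is a small sketch of my own (not part of the original figures) that ANDs the address and the mask octet by octet:

# Bitwise AND of 192.168.10.44 with 255.255.255.248, octet by octet.
$ip   = "192.168.10.44".Split(".")
$mask = "255.255.255.248".Split(".")
$subnetOctets = 0..3 | ForEach-Object { [int]$ip[$_] -band [int]$mask[$_] }
$subnetOctets -join "."   # gives 192.168.10.40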

STEP 3: Find Host Range

We know already that for subnetting this Class C address we have borrowed 5 bits from the Host field. These 5 bits are used to identify the subnets. The remaining 3 bits are used for defining hosts within a particular subnet.
The subnet address is identified by all 0 bits in the host part of the address. The first host within the subnet is identified by all 0s followed by a 1. The last host is identified by all 1s followed by a 0. The broadcast address is all 1s. Then we move to the next subnet and the process is repeated in the same way. The following diagram clearly illustrates this process:
Cisco Subnetting 4
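The same pattern can be written down in a few lines of PowerShell (again a sketch of my own, using the block size of 8 that comes from the 3 remaining host bits):

# Host range and broadcast for the /29 subnet that contains 192.168.10.44.
$network = 40        # from the AND operation in step 2
$block   = 8         # 2^3, because 3 host bits remain
"Network   : 192.168.10.$network"
"First host: 192.168.10.$($network + 1)"
"Last host : 192.168.10.$($network + $block - 2)"
"Broadcast : 192.168.10.$($network + $block - 1)"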

STEP 4: Calculate the Total Number of Subnets and Hosts Per Subnet

Knowing the number of subnet and host bits, we can now calculate the total number of possible subnets and the total number of hosts per subnet. We assume in our calculations that the all-zeros and all-ones subnets can be used. The following diagram illustrates the calculation steps.
Cisco Subnetting 5
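In script form the same calculation looks like this (a sketch, assuming the all-zeros and all-ones subnets are usable, as stated above):

# A /29 on a Class C network: 5 subnet bits, 3 host bits.
$subnetBits = 5
$hostBits   = 3
"Total subnets   : {0}" -f [math]::Pow(2, $subnetBits)        # 32
"Hosts per subnet: {0}" -f ([math]::Pow(2, $hostBits) - 2)    # 6 (network and broadcast excluded)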

Subnetting a Class C Address Using the Fast Way

Now let’s see how we can subnet the same Class C address using a faster method. Let’s again use the IP address 192.168.10.44 with subnet mask 255.255.255.248 (/29). The steps to perform this task are the following:
1. Total number of subnets: Using the subnet mask 255.255.255.248, the value 248 (11111000) indicates that 5 bits are used to identify the subnet. To find the total number of subnets available, simply raise 2 to the power of 5 (2^5); the result is 32 subnets.
Note that if the all-zeros subnet is not used, we are left with 31 subnets, and if the all-ones subnet is also excluded, we end up with 30 subnets.
2. Hosts per subnet: 3 bits are left to identify the host, therefore the total number of hosts per subnet is 2 to the power of 3, minus 2 (one address for the subnet address and one for the broadcast address), i.e. 2^3 - 2 = 6 hosts per subnet.
3. Subnets, hosts and broadcast addresses per subnet: To find the valid subnets for this specific subnet mask, subtract 248 from 256 (256 - 248 = 8). The result, 8, is both the block size and the first subnet address after subnet zero.
(Strictly speaking, the first available subnet is subnet zero itself, which we explicitly note.) The next subnet address is 8 + 8 = 16, the next one is 16 + 8 = 24, and this continues until we reach 248. The following table provides all the calculated information.
Note that our IP address (192.168.10.44) lies in subnet 192.168.10.40.
Cisco Subnetting 6
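To reproduce that table without the figure, here is the "fast way" written as a loop (my own rendering of the steps above, not from the original post):

# Block size = 256 - 248 = 8, so valid subnets step by 8 from 0 up to 248.
$block = 256 - 248
for ($net = 0; $net -le 248; $net += $block) {
    "Subnet 192.168.10.{0}  hosts .{1}-.{2}  broadcast .{3}" -f $net, ($net + 1), ($net + $block - 2), ($net + $block - 1)
}
# 192.168.10.44 falls in the 192.168.10.40 subnet (hosts .41-.46, broadcast .47).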

Test Your Subnetting Knowledge and Practice, Practice, Practice!

Don’t get discouraged if you didn’t understand every little detail I went over in this article. Subnetting is not really that difficult, but it does require a bit of practice.
Start with testing your knowledge of subnets and make sure you feel confident about this before you move on to designing your own subnets. But remember, if you’re on the Cisco Networking track you will have to deal with subnetting sooner or later, so grab this opportunity and start testing yourself.
Go ahead and subnet the network 192.168.10.0 using the subnet mask 255.255.255.192 (/26). Find the valid subnets, host ranges and broadcast addresses per subnet. If you want to double-check your answer, feel free to leave me a comment and I will provide you with the correct solution.
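If you want to verify your work afterwards, the same kind of loop used earlier can be adapted to the /26 mask (a sketch of my own; do try the exercise on paper first):

# /26 -> mask octet 192 -> block size 256 - 192 = 64.
$block = 256 - 192
for ($net = 0; $net -le 192; $net += $block) {
    "Subnet 192.168.10.{0}  hosts .{1}-.{2}  broadcast .{3}" -f $net, ($net + 1), ($net + $block - 2), ($net + $block - 1)
}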

By
http://www.trainsignal.com/blog/simplify-routing-how-to-organize-your-network-into-smaller-subnets

23 Oct 2012

Yorkshire Pudding


4 oz (100g) Plain Flour
1 medium sized egg
Pinch of salt
½ pint (280ml) of milk (or mixture of milk and water)
2oz (50g) lard/fat or 2 tablespoons of oil

Mix the flour and salt in a basin and make a hollow in the middle. Drop the egg into the hollow and stir it in with a wooden spoon. Add the milk (or milk and water) gradually, stirring all the time until the flour is worked in. Add the rest of the liquid and beat well. The end result should have a similar consistency to single cream.

Melt the fat in the cooking tin at the top of the oven until spitting hot. You can use one large tin (square, rectangular or round), small tins, patty tins or a bun tin. When the fat is hot enough, pour in the batter, only half filling small tins, patty tins or bun tins. Cook at 230C or gas mark 8: large tins for about 30 minutes, small tins or bun tins for 15 - 20 minutes.

When cooked they should turn out puffy, golden and crispy on the outside and sunken in the middle. 

By Lorraine Waddington

18 Sept 2012

EMC does not start


EMC shows an error when launching:
The attempt to connect to http://exch01.domain.local/powershell using "KERBEROS" authentication failed: connecting to remote server failed with following error message: access is denied

1. Check all Exchange services and make sure they are all started.
2. Check the time and time zone; make sure they are synchronized with the DC.

Launch EMC. If it still does not work, go to the next steps.

3. From the Exchange server, open a command prompt and type:
WinRM QuickConfig
Answer "y" or yes to all prompts.
Keep running the above command until it returns a positive result
(WinRM is already set up to receive requests... blah, blah, blah).
4. After it returns a message similar to the above, make sure the following services are set to their default startup type and are started:
Windows Remote Management (WS-Management)
World Wide Web Publishing Service.

5. Run IIS Manager, click on the "Default Web Site" and select Restart.

Good luck 

10 Sept 2012

Tip: 7 Best Practices for Physical Servers Hosting Hyper-V Roles

http://technet.microsoft.com/en-us/magazine/dd744830.aspx


Before setting up a physical server to host the Hyper-V role, download, read, and understand information included in the white paper “Performance Tuning Guidelines for Windows Server 2008”. Three sections in this white paper that can have a significant impact on the performance of the physical server discuss tuning the server hardware and setting up the networking and storage subsystems. These are especially critical for Hyper-V because the hypervisor itself sits on top of the hardware layer as described earlier and controls all hardware in Windows Server 2008. The operating system itself essentially runs in a virtual machine, better known as the Parent Partition.

Here are seven best practices for physical servers hosting the Hyper-V role.

Avoid Overloading the Server
Determining the number of virtual machines that will be hosted on the Hyper-V server and the workloads they will be handling is critical. The version of the operating system that will be installed on the physical server can help in this regard, so the first “best practice” is to consider using Windows Server 2008 Datacenter x64 with Hyper-V. The Datacenter x64 edition supports up to 64 processors, 2 terabytes of physical memory, and 16 failover cluster nodes for Quick Migration scenarios and allows unlimited virtual machines to be run in Hyper-V. Selecting a Server Core installation provides added benefits, including enhanced security and lower maintenance.

Ensure High-Speed Access to Storage
For storage, consider using a storage area network (SAN) that is configured with high-speed (10,000 rpm or greater) drives (SATA or SAS) that support queued I/O and RAID 0+1 configurations. You can use either Fibre Channel or iSCSI SAN hardware.

Install Multiple Network Interface Cards
For networking, be sure to have more than one network card installed on the physical server and dedicate one network interface to Hyper-V server administration. This means no virtual networks in Hyper-V will be configured to use this NIC. For high-workload virtual machines, you might want to dedicate a physical network adapter on the server to the virtual network the virtual machine is using. Ensure virtual machines that share a physical adapter do not oversubscribe to the physical network. Use the Reliability And Performance Monitor to establish a performance baseline for the load and then adjust NIC configurations and loads accordingly.

If you have only a single NIC in the machine on which you are configuring the Hyper-V role and you are doing the configuration remotely (say, in an RDP session), then choosing to bind the Virtual Switch Protocol to that single NIC will disconnect your session, and a reconnection might not be possible until the newly created virtual network adapter has been properly configured.

Avoid Mixing Virtual Machines That Can Use Integration Services with Those That Cannot
Do not mix on the same physical server virtual machines that can take advantage of Hyper-V Integration Services with those that cannot. Virtual machines that cannot use Integration Services must use legacy network adapters to gain access to the physical network. To accommodate legacy network adapters, you might need to disable some high-end features on the network interface, which can unnecessarily limit the functionality of the synthetic devices. Additionally, using emulated devices places an extra workload on the Hyper-V server.

Configure Antivirus Software to Bypass Hyper-V Processes and Directories
If you are running antivirus software on the physical server, you might want to consider excluding the Vmms.exe and Vmswp.exe processes. Also, exclude the directories that contain the virtual machine configuration files and virtual hard disks from active scanning. An added benefit of using pass-through disks in your virtual machines is that you can use the antivirus software running on the physical server to protect that virtual machine.

Avoid Storing System Files on Drives Used for Hyper-V Storage
Do not store any system files (for example, Pagefile.sys) on drives dedicated to storing virtual machine data.

Monitor Performance to Optimize and Manage Server Loading
When running multiple high-workload virtual machines on a Hyper-V server, ensure a proper aggregate performance baseline is obtained over a specified period of time (say, five days during normal working hours) to ensure the hardware configuration for the physical server is optimal to support the load being placed on it by the virtual machines. If adding more memory, processors, or higher performing storage is not possible, you might need to migrate the virtual machines to other Hyper-V servers.

9 Sept 2012

RAID 10 on x3650 M4 with ServeRaid M5110e



It would be easier if the RAID setup could be done in ServerGuide, but unfortunately, at the time of writing this post, ServerGuide 9.23 did not work with my x3650 M4 (7915).

1. Turn on the server and press F1 at the IBM startup screen.

2. Select Server Settings > Adapter and UEFI drivers, then press Enter (to compile the list of drivers).


3. Select the type of RAID card that is in your server.

4. You will have two options: EFI WebBios and EFI CLI. Select option 1, EFI WebBios.


5. Select your raid card and click Start

6. Select Configuration Wizard on the MegaRaid BIOS Config Utility screen.
7. Select New Configuration, then Manual Configuration, and click Next.
8. Select DriveGroup0 in the right pane (Drive Groups), select half of the physical disks in the left pane (Drives), and click Add to Array.
9. Click Accept DG. MegaRaid will display another drive group, DriveGroup1.

10. Select the new DriveGroup1 in the right pane, then select the rest of the hard disks in the left pane and click Add to Array.
11. Click Accept DG. In the right pane you will now have 3 drive groups, but only 2 have a size displayed. The size of a drive group is the size of your hard disks.

12. Click Next.

13. On the Span Definition screen, select each drive group in the combo box and click Add to Span.
14. Click Next.
15. If the RAID type shown is 10, you have done things right. Click Update Size and then Accept.

16. The next screen will display the RAID level of every drive group. In my case it is RAID 0.

17. Click Next, then Accept to finish.

Good luck :-)

ServeRaid M5110e on x3650M4

Unfortunately, at the time I was installing Windows 2008 R2 on the x3650 M4 (7915), IBM had not yet added the ServeRaid M5110e to its ServerGuide 9.23. Whatever RAID level I selected by following the steps in ServerGuide, a critical RAID update failure error was displayed. It took me a day to realize that I needed to find a way around it.
First of all, the RAID setup has to be done via WebBios, because the ServerGuide RAID setup did not work.
Secondly, download the ServeRaid M5110e Windows driver from http://download2.boulder.ibm.com/sar/CMA/XSA/ibm_dd_sraidmr_5.2.127_windows_32-64.exe and copy it onto a USB key.
Lastly, boot the x3650 M4 from the Windows 2008 R2 DVD and install as normal, except that you have to load the ServeRaid driver from the USB key before you can see the hard disk partitions.
...Go for a beer now

22 Aug 2012

Understanding where your virtual machine files are [Hyper-V]

http://blogs.msdn.com/b/virtual_pc_guy/archive/2010/03/10/understanding-where-your-virtual-machine-files-are-hyper-v.aspx

To be honest, I am surprised that I have not blogged about this before, but today I would like to talk about how virtual machine files are placed on the hard disk. 
Virtual Machine files
The first thing to know is what files are used to create a virtual machine:
  • .XML files
    • These files contain the virtual machine configuration details.  There is one of these for each virtual machine and each snapshot of a virtual machine.  They are always named with the GUID used to internally identify the virtual machine or snapshot in question.
  • .BIN files
    • This file contains the memory of a virtual machine or snapshot that is in a saved state.
  • .VSV files
    • This file contains the saved state from the devices associated with the virtual machine.
  • .VHD files
    • These are the virtual hard disk files for the virtual machine
  • .AVHD files
    • These are the differencing disk files used for virtual machine snapshots
Understanding data roots
Hyper-V has a concept of the “virtual machine data root” and the “virtual machine snapshot root”.  These are the locations where the virtual machine configuration (.XML) and saved state (.BIN & .VSV) files are stored.  For example – a virtual machine which had a virtual machine data root of “D:\Foo” and a snapshot data root of “D:\Foo” and had two snapshots would have a file structure like this:
D:\Foo
D:\Foo\Snapshots
D:\Foo\Snapshots\[Snapshot #1 GUID directory]
D:\Foo\Snapshots\[Snapshot #1 GUID].XML
D:\Foo\Snapshots\[Snapshot #2 GUID directory]
D:\Foo\Snapshots\[Snapshot #2 GUID].XML
D:\Foo\Virtual Machines
D:\Foo\Virtual Machines\[Virtual Machine GUID directory]
D:\Foo\Virtual Machines\[Virtual Machine GUID].XML
If the snapshots and the virtual machine had saved states associated with them – then the file structure would look like this:
D:\Foo
D:\Foo\Snapshots
D:\Foo\Snapshots\[Snapshot #1 GUID directory]
D:\Foo\Snapshots\[Snapshot #1 GUID directory]\[Snapshot #1 GUID].BIN
D:\Foo\Snapshots\[Snapshot #1 GUID directory]\[Snapshot #1 GUID].VSV
D:\Foo\Snapshots\[Snapshot #1 GUID].XML
D:\Foo\Snapshots\[Snapshot #2 GUID directory]
D:\Foo\Snapshots\[Snapshot #2 GUID directory]\[Snapshot #1 GUID].BIN
D:\Foo\Snapshots\[Snapshot #2 GUID directory]\[Snapshot #1 GUID].VSV
D:\Foo\Snapshots\[Snapshot #2 GUID].XML
D:\Foo\Virtual Machines
D:\Foo\Virtual Machines\[Virtual Machine GUID directory]
D:\Foo\Virtual Machines\[Virtual Machine GUID directory]\[Virtual Machine GUID].BIN
D:\Foo\Virtual Machines\[Virtual Machine GUID directory]\[Virtual Machine GUID].VSV
D:\Foo\Virtual Machines\[Virtual Machine GUID].XML
Some key things to highlight about data roots:
  • We always create a “Virtual Machines” folder under the virtual machine data root and store the virtual machine configuration files there.
  • We always create a “Snapshots” folder under the snapshot data root and store the snapshot configuration files there.
  • We fully support multiple virtual machines having the same virtual machine and snapshot data root.
Understanding VHD and AVHD locations
.VHD files can be created pretty much anywhere you want.  In Windows Server 2008 R2, .AVHD files are always created in the same location as their parent .VHD files.
Common Virtual Machine File Configuration #1 – Default Virtual Machine Data Root
A virtual machine with a default virtual machine data root is one where you created the virtual machine and accepted the default options in the new virtual machine wizard, specifically where you did not check to “Store the virtual machine in a different location” on the first page of the new virtual machine wizard:
image
In this configuration option the virtual machine data root and snapshot data root will be set to the path specified under the Hyper-V Settings in the “Virtual Machines” setting, and the virtual hard disk will be created under the path specified under the Hyper-V Settings in the “Virtual Hard Disks” setting:
image
These paths are normally set to “C:\ProgramData\Microsoft\Windows\Hyper-V” for the “Virtual Machines” setting and “C:\Users\Public\Documents\Hyper-V\Virtual Hard Disks” for the “Virtual Hard Disks” setting.  That said – I usually change these settings to “D:\Hyper-V\Configuration Files” and “D:\Hyper-V\Virtual Hard Disks” on my systems as I find this easier to work with.
Common Virtual Machine File Configuration #2 – External Virtual Machine Data Root
If you do select to “Store the virtual machine in a different location” you will get what we call a virtual machine with an external virtual machine data root.
image
With this option we create a new folder named after the virtual machine, and set the virtual machine data root and snapshot data root to this folder.  We also default to creating the virtual hard disk in this new folder.

Common Virtual Machine File Configuration #3 – Exported / Imported virtual machine
If you export a virtual machine and then import it without checking the option to “Duplicate all files so the same virtual machine can be imported again”, you will end up with a virtual machine that looks like a virtual machine with an external data root – but there will be one difference.
image
Instead of having the virtual hard disks stored in the same location as the virtual machine data root – they will be stored in a “Virtual Hard Disks” folder under the virtual machine data root folder instead.
Changing a virtual machine to a default data root virtual machine
If you have an existing virtual machine that you want to change to a “default data root” configuration – the easiest way to do this is to export the virtual machine and then import it and check the option to “Duplicate all files so the same virtual machine can be imported again”.  The resulting virtual machine will be a default data root virtual machine.
Changing a virtual machine to an external data root virtual machine
If you have an existing virtual machine that you want to change to an “external data root” configuration, you have two options:
  • Spend some time scripting the import / export APIs in Hyper-V.  It is possible to do it this way – but it is not easy.
  • Move the virtual machine using System Center Virtual Machine Manager.  SCVMM will always transform a virtual machine into an external data root virtual machine in the process of moving it.
Changing the snapshot data root for a virtual machine
The only way to change the virtual machine data root for a virtual machine is by using import / export.  But the snapshot data root for a virtual machine can be changed at any time – as long as all snapshots are deleted first.  If you have deleted all existing snapshots you can change the snapshot data root by changing the “Snapshot File Location” setting for the virtual machine under the virtual machine settings user interface.

Hyper-V CPU limitation

By default, Hyper-V only allows a maximum of 4 CPU cores to be presented to a VM. To get more of the cores available in your Hyper-V host presented to a VM, you simply edit the processor count value in the VM's configuration XML file and... enjoy your work.










14 Aug 2012

Resume Content DB upgrade

I’m going to start this post with a couple of little issues you may hit when upgrading your SharePoint environment; there are also some great references for you at the end of the post.


This applies if your SharePoint 2010 upgrade is stuck and you get the following in Central Administration: “database is up to date, but some sites are not completely upgraded”. Note that there is a whole lot of misinformation out there about performing/reinitializing an upgrade by using the psconfig -cmd upgrade -inplace v2v (or b2b) commands. However, those commands are for upgrading your farm; if the upgrade is failing on the content databases, it will continue to fail. The steps below are all about resuming the content database upgrade.
The first thing you will need to do is get the ID of the content database that is problematic:
Get-SPContentDatabase -Identity Name_of_Database
 That should return something that looks like this…



Once you have the ID, you’ll want to execute the Upgrade-SPContentDatabase command:
Upgrade-SPContentDatabase -Identity f7f9907c-71e8-494d-8f2b-4ce6a5b934ea


References:
 http://www.shareesblog.com/?p=560



Diagnose MissingWebPart and MissingAssembly issues from the SharePoint Health Analyzer using PowerShell


In this article I am going to focus on MissingWebPart and MissingAssembly errors. As stated in previous articles, there is no silver bullet to solving these errors in all cases, but the scripts offered here will allow you troubleshoot the errors further to find exactly where they are happening in the content database. Once you know this, you have a fighting chance of being able to solve the problem.
MissingWebPart Error
In this example, I have received the following error whilst running a Test-SPContentDatabase operation after a content database migration from SharePoint 2007 to 2010. It also appears in the SharePoint Health Analyzer under the “Configuration” category with the title “Missing server side dependencies”:
Category        : MissingWebPart
Error           : True
UpgradeBlocking : False
Message         : WebPart class [4575ceaf-0d5e-4174-a3a1-1a623faa919a] is referenced [2] times in the database [SP2010_Content], but is not installed on the current farm. Please install any feature/solution which contains this web part.
Remedy          : One or more web parts are referenced in the database [SP2010_Content], but are not installed on the current farm. Please install any feature or solution which contains these web  parts.

As you can see, the error gives you a “WebPart class” GUID, the name of the content database, and how many times it is referenced in the database, but little else. What we need to find out here is either the name of the web part or on which pages it is referenced in the database.
For this I am going to reuse the Run-SQLQuery PowerShell script that I introduced in my article on MissingSetupFile errors:
function Run-SQLQuery ($SqlServer, $SqlDatabase, $SqlQuery)
{
    # Build a connection to the SQL Server instance using integrated security
    $SqlConnection = New-Object System.Data.SqlClient.SqlConnection
    $SqlConnection.ConnectionString = "Server =" + $SqlServer + "; Database =" + $SqlDatabase + "; Integrated Security = True"

    # Attach the query to a command and run it through a data adapter
    $SqlCmd = New-Object System.Data.SqlClient.SqlCommand
    $SqlCmd.CommandText = $SqlQuery
    $SqlCmd.Connection = $SqlConnection
    $SqlAdapter = New-Object System.Data.SqlClient.SqlDataAdapter
    $SqlAdapter.SelectCommand = $SqlCmd

    # Fill a DataSet with the results and return the first table
    # (Fill also returns the row count, which we discard so it doesn't pollute the output)
    $DataSet = New-Object System.Data.DataSet
    $SqlAdapter.Fill($DataSet) | Out-Null
    $SqlConnection.Close()
    $DataSet.Tables[0]
}
Once you have loaded the function in a PowerShell console, you can run it by using the Run-SQLQuery command with the options relevant to your deployment. For [MissingWebPart] errors, you need to run a SQL SELECT query on the “AllDocs” table in the content database exhibiting the problem, joining to the “AllWebParts” table in order to find details about the missing web part. For example, you would type the following command to find details of the web part with the class ID “4575ceaf-0d5e-4174-a3a1-1a623faa919a”, as reported in the error above:
Run-SQLQuery -SqlServer "SQLSERVER" -SqlDatabase "SP2010_Content" -SqlQuery "SELECT * from AllDocs inner join AllWebParts on AllDocs.Id = AllWebParts.tp_PageUrlID where AllWebParts.tp_WebPartTypeID = '4575ceaf-0d5e-4174-a3a1-1a623faa919a'" | select Id, SiteId, DirName, LeafName, WebId, ListId, tp_ZoneID, tp_DisplayName | Format-List
Yes, it is a pretty long command, but it will produce a very useful output, as shown in this example:
Id             : 6ab5e70b-60d8-4ddf-93cb-6a93fbc410be
SiteId         : 337c5721-5050-46ce-b112-083ac52f7f26
DirName        : News/Pages
LeafName       : ArticleList.aspx
WebId          : dcc93f3e-437a-4fae-acea-bb15d5c4ea7d
ListId         : 7e13fe6c-3670-4d46-9601-832e3eb6a1e4
tp_ZoneID      : Body
tp_DisplayName :

Id             : b3fcfcd2-2f02-4fe9-93e4-9c9b5ecddf5b
SiteId         : 337c5721-5050-46ce-b112-083ac52f7f26
DirName        : Pages
LeafName       : Welcome.aspx
WebId          : 2ae0de59-a008-4244-aa66-d8f76c79f1ad
ListId         : d8f083f0-16b9-43d0-9aaf-4e9fffecd6cc
tp_ZoneID      : RightColumnZone
tp_DisplayName :

This tells us that the web part has been found on two pages (the references mentioned in the MissingWebPart error). SiteId tells us the site collection and WebId the site where the pages are located. We also have a DirName showing the relative path and the page name itself against the LeafName property. If you’re lucky, you might get the display name of the web part against the tp_DisplayName property, but if not, you should at least be able to tell which zone the web part has been added to by looking at the tp_ZoneID property.
Easily the best way of resolving these issues is to do as the error suggests and install the missing feature or solution containing the web part, but if this is not possible or feasible to do in your scenario, we can discover the site collection URL from the GUIDs using PowerShell and then remove the offending web parts from the pages specified.
To find the site collection URL using the information output from the query, type the following command:
$site = Get-SPSite -Limit all | where {$_.Id -eq "337c5721-5050-46ce-b112-083ac52f7f26"}
$site.Url
Once you have the site collection URL, you can use the relative path specified by the DirName property to find the location of the file. To remove the web part from the page, type the page URL in the browser and add ?contents=1 to the end of it. For example, to open the web part maintenance page for the ArticleList.aspx page specified in the output, type the following URL in the browser:
http://portal/news/pages/articlelist.aspx?contents=1
You can then highlight the offending web part (normally called ErrorWebPart for MissingWebPart errors) by ticking the box and clicking Delete. The screenshot below shows a web part maintenance page to give you an idea of the UI, but not an example of an ErrorWebPart as I had already removed them!
image
Note: If you remove an ErrorWebPart from a publishing page with versioning switched on, you may have to delete all earlier versions of the page before the error disappears from the SharePoint Health Analyzer or Test-SPContentDatabase report. This is because the web part will still be referenced from these versions, even though you removed it from the currently published page.
MissingAssembly Error
MissingAssembly errors look similar to this one:
Category        : MissingAssembly
Error           : True
UpgradeBlocking : False
Message         : Assembly [PAC.SharePoint.Tagging, Version=1.0.0.0, Culture=neutral, PublicKeyToken=b504d4b6c1e1a6e5] is referenced in the database [SP2010_Content], but is not installed on the current farm. Please install any feature/solution which contains this assembly.
Remedy          : One or more assemblies are referenced in the database [SP2010_Content], but are not installed on the current farm. Please install any feature or solution which contains these assemblies.

I normally find that MissingAssembly errors appear as the result of an event receiver that is still registered on a list or library but is part of a feature/solution no longer present on the farm.
In most cases, you may be able to look at the assembly name reported in this error and know what the problem is straight away. As before, the best way of resolving this is to reinstall the missing solution file. However, if you are not able to install the solution (e.g., maybe it only works in SharePoint 2007 and not 2010), then you may want to find the lists where the event receiver is installed and either remove the event receiver from the lists or delete the lists themselves.
To troubleshoot this issue we can re-use the Run-SQLQuery function used to help find missing web parts above. The table we need to look at this time though is called “EventReceivers”. For example, you would type the following command to find details of the assembly called “PAC.SharePoint.Tagging, Version=1.0.0.0, Culture=neutral, PublicKeyToken=b504d4b6c1e1a6e5”, as reported in the error above:
Run-SQLQuery -SqlServer "SQLSERVER" -SqlDatabase "SP2010_Content" -SqlQuery "SELECT * from EventReceivers where Assembly = 'PAC.SharePoint.Tagging, Version=1.0.0.0, Culture=neutral, PublicKeyToken=b504d4b6c1e1a6e5'" | select Id, Name, SiteId, WebId, HostId, HostType | Format-List
This will produce an output similar to the following:
Id       : 657a472f-e51d-428c-ab98-502358d87612
Name     :
SiteId   : 337c5721-5050-46ce-b112-083ac52f7f26
WebId    : 2ae0de59-a008-4244-aa66-d8f76c79f1ad
HostId   : 09308020-45a8-41e4-bbc0-7c8d8cd54132
HostType : 2

Id       : 0f660612-6be0-401e-aa1d-0ede7a9af8da
Name     :
SiteId   : 337c5721-5050-46ce-b112-083ac52f7f26
WebId    : 2ae0de59-a008-4244-aa66-d8f76c79f1ad
HostId   : 09308020-45a8-41e4-bbc0-7c8d8cd54132
HostType : 2

As with the MissingWebPart error before, we can use these GUIDs to get the site collection and site hosting the list with the missing event receiver, as follows:
$site = Get-SPSite -Limit all | where {$_.Id -eq "337c5721-5050-46ce-b112-083ac52f7f26"}
$web = $site | Get-SPWeb -Limit all | where {$_.Id -eq "2ae0de59-a008-4244-aa66-d8f76c79f1ad"}
$web.Url
The HostId property is the GUID of the object containing the event receiver. The HostType is the object type – in this case, HostType “2” means the event receiver host is a list. You can look at the other host types by checking this article on MSDN: http://msdn.microsoft.com/en-us/library/ee394866(v=prot.13).aspx.
Now we know the GUID refers to a list, we can get it using PowerShell with this command:
$list = $web.Lists | where {$_.Id -eq "09308020-45a8-41e4-bbc0-7c8d8cd54132"}
To remove the list completely, type the following command:
$list.Delete()
To keep the list intact and just remove the offending event receiver, copy the Id property from the Run-SQLQuery output into this command:
$er = $list.EventReceivers | where {$_.Id -eq "657a472f-e51d-428c-ab98-502358d87612"}
$er.Delete()
If you do decide to delete the list completely, ensure you also remove it from the site Recycle Bin and Site Collection Recycle Bin to ensure the file is removed from the content database. If not, the error may not disappear from the Health Analyzer or Test-SPContentDatabase operation.
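If you prefer to do that clean-up from the same PowerShell session, something along these lines should work. This is only a sketch that reuses the $web and $site objects obtained earlier, and note that DeleteAll() empties the entire recycle bin, not just the deleted list:

# Empty the site recycle bin, then the site collection recycle bin, so the
# deleted list really leaves the content database. WARNING: removes ALL recycled items.
$web.RecycleBin.DeleteAll()
$site.RecycleBin.DeleteAll()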

http://get-spscripts.com/2011/08/diagnose-missingwebpart-and.html

Removing features from a content database in SharePoint 2010 using PowerShell

The great thing about the Health Analyzer in SharePoint 2010 is that it will report on a number of potential issues with the server farm, which may cause a problem later whilst applying a cumulative update or service pack. Resolving these issues in advance will help to prevent an update failing when you run the SharePoint Configuration Wizard.
One of these problems may occur when a solution is removed from the farm before the corresponding features were deactivated from site collections and sites. The Health Analyzer will place this issue in the “Configuration” category with the title “Missing server side dependencies”.
Missing server side dependencies
The error message reported will look similar to this one:
[MissingFeature] Database [SharePoint_Content_Portal] has reference(s) to a missing feature: Id = [8096285f-1463-42c7-82b7-f745e5bacf29], Name = [My Feature], Description = [], Install Location = [Test-MyFeature]. The feature with Id 8096285f-1463-42c7-82b7-f745e5bacf29 is referenced in the database [SharePoint_Content_Portal], but is not installed on the current farm. The missing feature may cause upgrade to fail. Please install any solution which contains the feature and restart upgrade if necessary.
As shown above, this message reports a content database name (SharePoint_Content_Portal) and feature ID (8096285f-1463-42c7-82b7-f745e5bacf29), but not the sites or site collections where the feature exists. In addition to this, even if you did know where the feature was activated, it will not appear anywhere in the UI for you to deactivate because the solution has been removed from the farm.
The following PowerShell script will interrogate a specified content database and feature ID and do two things:
  1. Produce a report in the PowerShell console showing which sites or site collections contain the offending feature.
  2. Forcibly deactivate the feature from the applicable sites or site collections.
Note: Whilst this article applies specifically to the scenario of deactivating features from removed solutions reported by the Health Analyzer, I have decided to write the script so that it deactivates any specified feature from sites and site collections – not just those missing from the farm. This allows the script to be used in other scenarios, too.
To use the script, run these functions in a PowerShell console with the SharePoint 2010 add-ons loaded:
function Remove-SPFeatureFromContentDB($ContentDb, $FeatureId, [switch]$ReportOnly)
{
    $db = Get-SPDatabase | where { $_.Name -eq $ContentDb }
    [bool]$report = $false
    if ($ReportOnly) { $report = $true }
   
    $db.Sites | ForEach-Object {
       
        Remove-SPFeature -obj $_ -objName "site collection" -featId $FeatureId -report $report
               
        $_ | Get-SPWeb -Limit all | ForEach-Object {
           
            Remove-SPFeature -obj $_ -objName "site" -featId $FeatureId -report $report
        }
    }
}
function Remove-SPFeature($obj, $objName, $featId, [bool]$report)
{
    $feature = $obj.Features[$featId]
   
    if ($feature -ne $null) {
        if ($report) {
            write-host "Feature found in" $objName ":" $obj.Url -foregroundcolor Red
        }
        else
        {
            try {
                $obj.Features.Remove($feature.DefinitionId, $true)
                write-host "Feature successfully removed from" $objName ":" $obj.Url -foregroundcolor Red
            }
            catch {
                write-host "There has been an error trying to remove the feature:" $_
            }
        }
    }
    else {
        #write-host "Feature ID specified does not exist in" $objName ":" $obj.Url
    }
}
You now have two options for using these functions. If you just want to produce a report in the console showing which sites and site collections contain the feature, type the following (note the ReportOnly switch on the end):
Remove-SPFeatureFromContentDB -ContentDB "SharePoint_Content_Portal" -FeatureId "8096285f-1463-42c7-82b7-f745e5bacf29" -ReportOnly
This command will step through all sites and site collections and display the following message whenever it finds the feature specified:
Feature found in site : http://portal/site
If you want to go ahead and remove the feature from all sites and site collections in the content database, type the same command without the ReportOnly switch on the end:
Remove-SPFeatureFromContentDB -ContentDB "SharePoint_Content_Portal" -FeatureId "8096285f-1463-42c7-82b7-f745e5bacf29"
Running this command will step through all sites and site collections, remove the feature specified, and display the following output:
Feature successfully removed from site : http://portal/site
You should now be able to reanalyse the “Missing server side dependencies” issue in the Health Analyzer to clear the problem (providing there are no other issues reported under that title, of course!).
http://get-spscripts.com/2011/06/removing-features-from-content-database.html
