Tag Archives: virtualization

Screen resolution scaling has stopped working after Horizon View agent update

This posting is ~8 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Another inconvenience that I noticed during the update process from VMware Horizon View 6.1.1 to 6.2 was that the automatic screen resizing stopped working. When I connected to a desktop pool with the VMware Horizon client, I only got the screen resolution of the VM (the resolution that is used when connecting to the VM with the vSphere console), not 1920×1200 as expected. This issue only occurred with PCoIP, not with RDP. I had this issue with a static desktop and a dynamic desktop pool, and it occurred after updating the Horizon View agent. The resolution scaling still worked with a Windows Server 2012 R2 RDS host when I connected to the RDS with PCoIP.

VMware KB1018158 (Configuring PCoIP for use with View Manager) did not solve the problem. I checked the VMX version, the video RAM config etc. Nothing had changed, everything was configured as expected. At this point it was clear to me that this had to be an issue with the Horizon View agent. I took some snapshots and tried to reinstall the Horizon View agent. I removed the Horizon View agent and the VMware Tools from one of my static desktops. After a reboot, I installed the VMware Tools and then the Horizon agent. To my surprise, this first attempt solved the problem. I tried the same with my second static desktop pool VM and with the master VM of my dynamic desktop pool (don’t forget to recompose the VMs…). This workaround fixed the problem in each case.

I don’t know if this is a bug. I haven’t found any hints in the VMware Community forum or blogs. Maybe someone knows the answer.

VMware Horizon View agent update on RDS host fails with “Internal Error 25030”

This posting is ~8 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

I’m running a small VMware Horizon View environment in my lab. Nothing fancy, but all you need to show what Horizon View can do for you. This environment includes a Windows Server 2012 R2 RDS host. During the update process from Horizon View 6.1.1 to 6.2, I had to update the View agent on this RDS host. This update installation failed with an “Internal Error 25030”, followed by a rollback. Fortunately I had a snapshot, so I went back to the previous state and tried the update again. This attempt also went awry.

To make a long story short: Read the fscking release notes! This quote is taken from the Horizon View 6.2 release notes:

When you upgrade View Agent 6.1.1 to View Agent 6.2 on an RDS host running on Windows Server 2012 or 2012 R2, the upgrade fails with an “Internal Error 25030” message.
Workaround: Uninstall View Agent 6.1.1, restart the RDS host, and install View Agent 6.2.

And this is not the first time that this error has occurred. I found this quote in the Horizon View 6.1.1 release notes:

When you upgrade View Agent 6.1 to View Agent 6.1.1 on an RDS host running on Windows Server 2012 or 2012 R2, the upgrade fails with an “Internal Error 25030” message.
Workaround: Uninstall View Agent 6.1, restart the RDS host, and install View Agent 6.1.1

If you take a closer look at these two statements, you might notice some similarities… But I do not want to be spiteful. The workaround did the trick: simply uninstall the View agent (if it’s still installed after the rollback… that was not the case for me), reboot and reinstall the View agent.

PernixData Architect Software

This posting is ~8 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

With the general availability of PernixData FVP 3.1, PernixData released the first version of PernixData Architect.

One of the biggest problems today is that management tools are often focused on the deployment and monitoring of applications or infrastructure. This doesn’t lead to a holistic view of applications and the related data center infrastructure. You have to monitor at several points within the application stack, and even then you won’t get a holistic view. Without proper information, you can’t make proper decisions. At this point, PernixData Architect comes into play.

PernixData Architect is a software platform that supports the complete IT life cycle, from design and deployment through operation and optimization. It supports the decision-making process with data gathering and big data analytics. PernixData Architect continuously generates information and recommendations based on data gathered from VMs, storage devices, vCenter, the network etc. This information pool can be analysed with big data techniques. Data is gathered, set into context (this is what information is), and the resulting information is linked and combined with recommendations. Here are some examples of what PernixData Architect can do for you (source):

  • Descriptive Analytics – Identify and profile the top 10 VMs on latency, throughput and IOPS.
  • Predictive Analytics – Calculate server-side resources needed to run a VM in Write Through versus Write Back mode, ensuring optimal hardware is allocated before a problem arises.
  • Prescriptive Analytics – Recommend ideal server-side resources based on application patterns.

PernixData Architect is a software-only solution and can be deployed with or without PernixData FVP. Without FVP, Architect can be used as a monitoring tool and gives you visibility, management and recommendations. Architect works with any server and storage platform that is compatible with VMware vSphere!

I’ve installed the latest PernixData FVP 3.1 release in my lab and enabled the 30 days trial period for PernixData Architect. You can access Architect through the web UI.

prnx_architect_1 (Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0)

As you can see, I have two clusters in my lab and both are accelerated using PernixData FVP. One cluster uses Distributed Fault Tolerant Memory (DFTM), the other cluster uses SSDs as acceleration resources. If Architect is enabled, FVP doesn’t display any stats and refers to the Architect UI. Below is a screenshot of the summary screen, which gives you a good overview at first glance.

prnx_architect_2 (Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0)

Architect includes much more stats than FVP.

prnx_architect_3 (Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0)

On the “Intelligence” page, you get values for the working set of each ESXi host in the cluster. This is an important value for the right sizing of your acceleration resources.

prnx_architect_4 (Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0)

As mentioned, PernixData Architect uses the gathered data to give you recommendations in real time. Even in my lab cluster, there are things to improve. ;)

prnx_architect_5 (Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0)

This is only a short overview of PernixData Architect, but you might see now what insight Architect can give you. If you are curious to see what PernixData FVP and Architect can do for you, you can simply install both products as part of a proof of concept and test them for 30 days. Even if you don’t want to install FVP, Architect can be used without FVP. And even FVP can be used without acceleration resources, in a monitoring mode.

Using VCSA as remote syslog – Don’t forget the log rotation!

This posting is ~8 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.
Important note: It seems that vCenter Server Appliance updates revert the changes. Please check the settings after each update!

The VMware vCenter Server Appliance (VCSA) can act as a remote syslog destination for ESXi hosts. This is very handy for troubleshooting and I really recommend using this feature. But VMware ESXi hosts can be really chatty, and therefore it’s a good idea to keep an eye on the free disk space of the VCSA.

Yesterday, a colleague had an interesting support case. A customer reported that his Veeam Backup & Replication jobs failed and that he was unable to log in to the vCenter with the vSphere Client and vSphere Web Client. My colleague checked the VCSA VM and noticed that the VPXD failed to start (“Waiting for vpxd to initialize: ….failed”). Together we checked the appliance and the log files. The vpxd.log (/var/log/vmware/vpx) had last been updated weeks ago, but the last entry was interesting: No space left on device. But there was free disk space on /storage/log. I immediately checked the inode count with df -i and there it was: no free inodes. Why is this a problem? Each name entry in the file system consumes an inode. If there are no free inodes, no new directories and files can be created. The error message is the same as for missing disk space. Something had to have created a lot of files on /storage/log. Because /var/log/vmware is a symbolic link to /storage/log/vmware, it had to be something on the /storage/log partition. We checked the remote syslog location under /storage/log/remote and found gigabytes of data and an incredible number of logs. After removing the logs, the VPXD was able to start and the inode count was back at a normal level.
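If you ever have to hunt down a problem like this, here is a quick sketch (assuming the VCSA directory layout described in this post) to check the inode usage and to find the directory that eats all the inodes:

# show inode usage per filesystem - an IUse% of 100% means no new files can be created
df -i

# count the file system entries below each directory on /storage/log to find the culprit
# (this can take a while on a well-filled partition)
for d in /storage/log/*; do
  printf '%s: ' "$d"
  find "$d" | wc -l
done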

But why were there so many logs? We checked the logrotate config and found a faulty config for the remote syslog files. Instead of rotating logs and removing old ones, this config rotated all logs every day and multiplied the number of logs. Please note that there is no logrotate config to rotate remote syslog files by default! This one had been added manually.
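I can only guess how the faulty config looked, but a classic mistake is a wildcard that also matches the files logrotate has already rotated. A purely hypothetical sketch of such a config (do NOT use this; the paths match the default remote syslog layout shown below):

# hypothetical, broken logrotate config - the pattern messages-* also matches
# already rotated files, so every run rotates the rotated copies again
/var/log/remote/*/*/messages-* {
  daily
  rotate 30
}

Because messages-* also matches the rotated copies from previous runs, every run rotates those copies again, and the number of files keeps growing with every day.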

This is the default config for the remote syslog-collector of the VCSA:

destination log_remote {
            file("/var/log/remote/$HOST_FROM/$YEAR-$MONTH/messages-$YEAR-$MONTH-$DAY"
            create_dirs(yes) frac-digits(3)
            template("$ISODATE $PROGRAM $MSGONLY\n")
            template_escape(no)
            );
};

As you can see, with these settings a folder for each host and each month is created. According to this VMTN posting, we changed the syslog-collector config a bit:

destination log_remote {
            file("/var/log/remote/$HOST_FROM/messages"
            create_dirs(yes) frac-digits(3)
            template("$ISODATE $PROGRAM $MSGONLY\n")
            template_escape(no)
            );
};

With these settings, only a single file per host is created. We also made a change to /etc/logrotate.d/syslog and added this at the end:

/var/log/remote/*/messages {
  daily
  compress
  delaycompress
  rotate 30
  postrotate
    /etc/init.d/syslog-collector reload > /dev/null
  endscript
}

With this configuration, 30 log files will be preserved. The number of log files, or how often the log rotation should happen (weekly or daily), can easily be adjusted, as the sketch below shows. But these settings should be sufficient for small environments.
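If, for example, weekly rotation with roughly three months of retention is enough for you, the same block could look like this (only the interval and the rotate count are changed, the rest is identical to the config above):

/var/log/remote/*/messages {
  weekly
  compress
  delaycompress
  rotate 12
  postrotate
    /etc/init.d/syslog-collector reload > /dev/null
  endscript
}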

It’s important to understand that the VCSA has different disks and that the disks are mounted to different mount points within the root filesystem. This is from a vSphere 5.5 VCSA:

vcsa1:~ # mount
/dev/sda3 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,mode=1777)
devpts on /dev/pts type devpts (rw,mode=0620,gid=5)
/dev/sda1 on /boot type ext3 (rw,noexec,nosuid,nodev,noacl)
/dev/sdb1 on /storage/core type ext3 (rw,nosuid,nodev)
/dev/sdb2 on /storage/log type ext3 (rw,nosuid,nodev)
/dev/sdb3 on /storage/db type ext3 (rw,nosuid,nodev)

/var/log/vmware and /var/log/remote are links to /storage/log/vmware and /storage/log/remote. Make sure that there is always enough free disk space on ALL disks! I also want to highlight VMware KB2092127 (After upgrading to vCenter Server Appliance 5.5 Update 2, pg_log file reports this error: WARNING: there is already a transaction in progress). This error hit me a couple of times…
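A small sketch of how this could be monitored (the mount points match the 5.5 VCSA above; the 90% threshold and the alerting via logger are just examples, adjust them to your needs):

#!/bin/bash
# warn via syslog if disk or inode usage on the VCSA storage partitions
# exceeds the threshold - could be run from cron, e.g. once per hour
THRESHOLD=90
for fs in /storage/core /storage/log /storage/db; do
  used=$(df -P "$fs" | awk 'NR==2 { gsub("%",""); print $5 }')
  inodes=$(df -Pi "$fs" | awk 'NR==2 { gsub("%",""); print $5 }')
  if [ "$used" -ge "$THRESHOLD" ] || [ "$inodes" -ge "$THRESHOLD" ]; then
    logger -t storage-check "WARNING: $fs at ${used}% space / ${inodes}% inodes"
  fi
done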

HP offers 1TB StoreOnce VSA for free

This posting is ~9 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

A free StoreOnce VSA, like the well-known 1 TB StoreVirtual VSA? That would be too cool to be real. But it is real! Since February, HP has offered a free 1 TB version of their StoreOnce VSA. I totally missed this announcement, but thanks to Calvin Zito I noticed it today.

The link leads to another blog post from Ashwin Shetty (Can you protect your data for free? Introducing the new free 1TB StoreOnce VSA), in which he provides more information about the free 1 TB StoreOnce VSA.

HP StoreOnce VSA

HP StoreOnce VSA runs the same software as the hardware-based StoreOnce appliances, but it’s delivered as a VM. You can run the VM on top of VMware ESXi, Microsoft Hyper-V or KVM. Besides the free 1 TB license, the StoreOnce VSA can be purchased with 4 TB, 10 TB or 50 TB capacity (usable, non-deduplicated). In contrast to the hardware-based appliances, the StoreOnce VSA comes with licenses for replication and StoreOnce Catalyst. This makes the StoreOnce VSA a perfect fit for remote and branch offices. You can quickly deploy the StoreOnce VSA and replicate the backed-up data to the central datacenter. But you can also deploy the VSA with the 4 TB, 10 TB or 50 TB license in your central datacenter and use it as a replication target for StoreOnce VSAs in the remote and branch offices (the replication target needs the replication license). A single VSA can act as a replication target for up to 8 StoreOnce VSAs and/ or StoreOnce appliances. You can scale the free 1 TB license with license upgrades to 4 TB, 10 TB and 50 TB. The StoreOnce VSA supports Catalyst, VTL (iSCSI) and NAS (CIFS or NFS) backup targets. Take a look into the QuickSpecs for more information. I also recommend reading the two blog posts from Ashwin Shetty on Around the Storage Block.

Last year I published several posts about the StoreOnce VSA. I recommend downloading the free 1 TB StoreOnce VSA and playing with it. Some of my blog posts should help you get started.

Top vBlog 2015: vcloudnine.de placed at #133

This posting is ~9 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

What a great show by Eric Siebert, David Davis, Simon Seagrave and their special guests Scott Davis from Infinio and John Troyer from TechReckoning! If you missed it, watch the recording!

First, I want to thank Eric for his work. If you read tweets like these, you will get a guilty conscience.

This is the seventh year that Eric has organized and conducted the annual Top vBlog contest. He put so much work into this contest and this should be recognized. I would also like to thank the sponsor Infinio for supporting this contest.

2015 was the second year in which I took part in the Top vBlog contest, but vcloudnine.de was on the voting list for the first time. I started this blog in 2014, so I was on the “Newcomer” list of the contest. I’m always trying to create valuable content. This isn’t easy, and often a draft is thrown in the trash. I hope vcloudnine.de was chosen because of valuable content and not because voters like me. ;) This year’s Top vBlog poll brought a lot of changes. Eric leaked some details in a blog post shortly before the announcement:

  • 60% more votes than 2014
  • 30% more blogs on the voting list
  • 7 changes in the top 10
  • 4 blogs in the top 25 that were not in there last year
  • 2 blogs in the top 25 that were newcomers this year
  • 1 blog new to the top 10

Congratulations to…

“Out of competition”: Duncan Epping (VCDX #007) and yellow-bricks.com for “defending” 1st place. Does anyone doubt it? Not really, right? ;) Congrats Duncan!

I am particularly happy for Derek Seaman (VCDX #125). His blog is a gold mine of content and he’s generating more and more (read his vSphere 6.0 series). Congrats Derek, #7 is totally deserved!

Congrats to Melissa Palmer for winning the “Best new blog” category. Keep on blogging, Melissa!

Congrats to Chris Wahl (VCDX #104) for winning the “Best independent blogger” category. Reading his blog is always a pleasure!

Also well deserved: Brian Madden has won the “Best VDI blog” category. His blog is an awesome resource if you deal with VDI!

Honestly: that William Lam won the “Best scripting blog” category and Cormac Hogan the “Best storage blog” category was no surprise to me. Totally deserved, guys!

I am very happy to see that some bloggers that I have on my reading list ranked up in the list. You can find the results of the Top vBlog 2015 contest here. Congrats to all participants and thanks again to Eric Siebert!

To make a long story short…

I’m happy and disappointed at the same time. vcloudnine.de landed at place 133. Not the worst placement for a new blog, but I missed my personal goal of being placed in the top 100. I’d like to thank everyone who voted for vcloudnine.de. This is a great motivation to work harder and to create more valuable content. Thank you all!

Tiering? Caching? Why it’s important to differentiate between them.

This posting is ~9 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Some days ago I talked to a colleague from our sales team and we discussed different solutions for a customer. I will spare you the details, but in the course of this we came across PernixData FVP, HP 3PAR Adaptive Optimization, HP 3PAR Adaptive Flash Cache and DataCore SANsymphony-V. And then the question of all questions came up: “What is the difference?”.

Simplify, then add Lightness

Let’s talk about tiering. To make it simple: tiering moves a block from one tier to another, depending on how often the block is accessed in a specific time window. A tier is a class of storage with specific characteristics, for example ultra-fast flash, enterprise-grade SAS drives or even nearline drives. Characteristics can be the drive type, the RAID level used or a combination of both. A 3-tier storage design can consist of only one drive type organized in different RAID levels: tier 1 can be RAID 1 and tier 3 can be RAID 6, but all tiers use enterprise-grade 15k SAS drives. But you can also mix drive types and RAID levels, for example tier 1 with flash, tier 2 with 15k SAS in a RAID 5 and tier 3 with SAS-NL and RAID 6. Each time a block is accessed, the block “heats up”. If it’s hot enough, it is moved one tier up. If it’s accessed less often, the block “cools down” and at a specific point the block is moved a tier down. If a tier is full, colder blocks have to be moved down and hotter blocks have to be moved up. This is a bit simplified, but products like DataCore SANsymphony-V with Auto-Tiering or HP 3PAR Adaptive Optimization work this way.

Let’s talk about caching. With caching, a block is only copied to a faster region, which can be flash or even DRAM. The original block isn’t moved; only a copy of the accessed block is placed on the faster medium. If this block is accessed again, the data is served from the faster medium. This also works for write I/O: if a block is written, the data is written to the faster medium and is moved to the underlying, slower medium later. You can’t store block copies indefinitely, so less frequently accessed blocks have to be removed from the cache if they are not accessed anymore, or if the cache fills up. Examples of caching solutions are PernixData FVP, HP 3PAR Adaptive Flash Cache or NetApp Flash Pool (and also Flash Cache). I have deliberately left the storage controller cache out of this list. All of the listed caching technologies (except NetApp Flash Cache) can do write-back caching. I wouldn’t recommend read-cache-only solutions like VMware vSphere Flash Read Cache, except in two situations: your workload is focused on read I/O, and/ or you already own a vSphere Enterprise Plus license and you do not want to spend extra money.

Tiering or caching? What to choose?

Well… it depends. What is the main goal when using these techniques? Accelerating workloads and making the best use of scarce and expensive storage (commonly flash storage).

Regardless of the workload, tiering will need some time to let the frequently accessed blocks heat up. Some vendors may anticipate this partially by always writing data to the fastest tier. But I don’t think that this is what I would call efficient. One benefit of tiering is that you can have more than two tiers. You can have a small flash tier, a bigger SAS tier and a really big SAS-NL tier. Usually you will see a 10% flash / 40% SAS / 50% SAS-NL distribution. But as I also mentioned: you don’t have to use flash in a tiered storage design. That’s a plus. On the downside, tiering can make mirrored storage designs complex. Heat maps aren’t mirrored between storage systems. If you fail over your primary storage, all blocks need to heat up again. I know that vendors are working on that. HP 3PAR and DataCore SANsymphony-V currently have a “performance problem” after a failover. It’s only fair to mention it. Here are two examples of products I know well and that both offer tiering: in an HP 3PAR Adaptive Optimization configuration, data is always written to the tier from which the virtual volume was provisioned. This explains the best practice of provisioning new virtual volumes from the middle tier (Tier 1 CPG). DataCore SANsymphony-V uses the performance class in the storage profile of a virtual disk to determine where data should be written. Depending on the performance class, data is written to the highest available tier (tier affinity is taken into account). Don’t get confused by the tier numbering: some vendors use tier 0 as the highest tier, others may start counting at tier 1.

Caching is more “spontaneous”. New blocks are written to the cache (usually flash storage, but it can also be DRAM). If a block is read from disk, it’s placed in the cache. Depending on the cache size, you can hold a lot of data. You can lose the cache, but you can’t lose data in this case. The cache only holds block copies (okay, okay, written blocks shouldn’t be acknowledged until they are in a second cache/ host/ $WHATEVER). If the cache is gone, it’s relatively quickly filled up again. You usually can’t have more than two “tiers”: you can have flash and you can have rotating rust. Exception: PernixData FVP can also use host memory. I would call this an additional half tier. ;) Nutanix uses a tiered storage design in their hyper-converged platform: flash storage is used as read/ write cache, cost-effective SATA drives are used to store the data. Caching is great if you have unpredictable workloads. Another interesting point: you can cache at different places in the stack. Take a look at PernixData FVP and HP 3PAR Adaptive Flash Cache. PernixData FVP sits next to the hypervisor kernel, while HP 3PAR AFC works at the storage controller level. FVP is awesome for accelerating VM workloads, but what if I have physical database servers? At this point, HP 3PAR AFC can play to its advantages. Because you usually have only two “tiers”, you will need more flash storage compared to a tiered storage design, especially if you mix flash and SAS-NL/ SATA.

Final words

Is there a rule for when to use caching and when to use tiering? I don’t think so. You may use the workload as an indicator. If it’s more predictable, you should take a closer look at a tiered storage design, in particular if the customer wants to separate data of different classes. If you have to deal with more unpredictable workloads, take a closer look at caching. There is no law that prevents combining caching and tiering. In the end, the customer requirements are the key. Do the math. Sometimes caching can outperform tiering from a cost perspective, especially if you mix flash and SAS-NL/ SATA in the right proportion.

My first impressions of PernixData FVP 2.5

This posting is ~9 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

On February 25, 2015, PernixData released the latest version of PernixData FVP. Even though it’s only a .5 release, FVP 2.5 adds some really cool features and improvements. The new features are:

  • Distributed Fault Tolerant Memory-Z (DFTM-Z)
  • Intelligent I/O profiling
  • Role-based access control (RBAC), and
  • Network acceleration for NFS datastores

Distributed Fault Tolerant Memory-Z (DFTM-Z)

FVP 2.0 introduced support for server-side memory as an acceleration resource. With this, it was possible to use server-side memory to accelerate VM I/O operations. Server-side memory is faster than flash, but also more expensive. FVP 2.5 adds support for adaptive memory compression. DFTM-Z provides a more efficient use of the expensive resource “server-side memory”. Some of you may think “Oh no, compression! This will only cost performance!”. I don’t think that this is fair. ;) The PernixData engineers are focused on performance, and I think they haven’t lost this focus during the development of DFTM-Z. DFTM-Z is enabled on hosts that use at least 20 GB of memory for FVP. With increasing memory used for FVP, the memory area used for compression is also increased. So not the whole memory area used for acceleration is compressed, only a part of it. With 20 GB contributed to the FVP cluster, the compressed memory region is 4 GB. With more than 160 GB, the region is increased to 32 GB.

Intelligent I/O profiling

A VM usually has a specific I/O profile. Sometimes this I/O profile changes quickly, e.g. when doing backups (large sequential I/Os). With intelligent I/O profiling, such workloads can now be bypassed. This doesn’t disable acceleration! The active FVP footprint of the VM remains active and is used to accelerate I/O. Intelligent I/O profiling can be enabled on a per-VM basis using PowerShell.

Role-based access control (RBAC)

Access to FVP can now be controlled with a role-based model. For this, three different roles are available.

  • Read and Write – View and change configuration, view performance charts
  • Read-Only – View configuration and performance charts only
  • No Access – no access

vCenter users with administrator permission have read/ write access to FVP. Users without administrator permission have only read-only access. All other users have no access to FVP.

Network acceleration for NFS datastores

In the past it was not possible to use the VM footprint, the “hot data”, after a vMotion if the VM was stored on an NFS datastore. Now this VM footprint can be used for read I/O over the network.

The update process

The update from FVP 2.0 to 2.5 is really easy:

  1. Transition the VMs to write through mode
  2. Update the FVP Management server
  3. Remove host extension on the hosts
  4. Install the new host extension on the hosts
  5. Enable vSphere Plugin (C# or Web Client)
  6. Transition the VMs to write back mode

I have performed this update in my lab, and the process went smoothly. Be sure to take a look into the upgrade guide. Sometimes there are interesting things in it. ;)

Overall, I’m still totally convinced of PernixData and I hope to place it in a customer project soon.

vCenter Server Appliance: Troubleshooting full database partition

This posting is ~9 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

Within 6 months, a customer of mine twice had a full database partition on a VMware vCenter Server Appliance. After the first outage, the customer increased the size of the partition that is mounted to /storage/db. Some months later, just a few days ago, the vCSA became unresponsive again, once more because of a filled-up database partition. The customer increased the size of the database partition again (~200 GB!!) and today I had time to take a look at this nasty vCSA.

The situation

vcsa_overview (Patrick Terlisten/ www.vcloudnine.de/ Creative Commons CC0)

Within 2 days, the storage usage of the database increased from 75% to 77%. First, I checked the size of the database:

vcsa:/opt/vmware/vpostgres/current/bin # /opt/vmware/vpostgres/current/bin/psql -h localhost -U vc VCDB
psql.bin (9.0.17)
Type "help" for help.

VCDB=> SELECT pg_database.datname, pg_size_pretty(pg_database_size(pg_database.datname)) AS size FROM pg_database;
  datname  |  size
-----------+---------
 template1 | 5353 kB
 template0 | 5345 kB
 postgres  | 5449 kB
 VCDB      | 2007 MB
(4 rows)

VCDB=>

As you can see, the database was only 2 GB in size. The pg_log directory was more interesting:

vcsa:/storage/db/vpostgres # du -shc /storage/db/vpostgres/*
4.0K    /storage/db/vpostgres/PG_VERSION
2.0G    /storage/db/vpostgres/base
704K    /storage/db/vpostgres/global
47M     /storage/db/vpostgres/pg_clog
4.0K    /storage/db/vpostgres/pg_hba.conf
4.0K    /storage/db/vpostgres/pg_ident.conf
141G    /storage/db/vpostgres/pg_log
252K    /storage/db/vpostgres/pg_multixact
12K     /storage/db/vpostgres/pg_notify
324K    /storage/db/vpostgres/pg_stat_tmp
20K     /storage/db/vpostgres/pg_subtrans
4.0K    /storage/db/vpostgres/pg_tblspc
4.0K    /storage/db/vpostgres/pg_twophase
81M     /storage/db/vpostgres/pg_xlog
20K     /storage/db/vpostgres/postgresql.conf
4.0K    /storage/db/vpostgres/postmaster.opts
4.0K    /storage/db/vpostgres/postmaster.pid
0       /storage/db/vpostgres/serverlog
143G    total

The directory was full of log files. The log files contained only one message:

vcsa:/storage/db/vpostgres/pg_log # more postgresql-2015-03-04_090525.log
 123462 tm:2015-03-04 09:05:25.488 UTC db:VCDB pid:1527 WARNING:  there is already a transaction in progress

The solution

This led me to VMware KB2092127 (After upgrading to vCenter Server Appliance 5.5 Update 2, pg_log file reports this error: WARNING: there is already a transaction in progress). And yes, this appliance had very likely been upgraded to U2. The solution is described in KB2092127 and is really easy to implement. Please note that this is only a workaround. There is currently no permanent fix, as mentioned in the article.
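Independent of the workaround, the space that is already occupied by the old log files has to be reclaimed. A minimal sketch, assuming the default pg_log path shown above (keep the most recent files if you still need them for a support case):

# remove PostgreSQL log files older than 7 days from the pg_log directory
find /storage/db/vpostgres/pg_log -name "postgresql-*.log" -mtime +7 -delete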

Top vBlog 2015 Contest has started

This posting is ~9 years old. You should keep this in mind. IT is a fast-moving business. This information might be outdated.

If you are a frequent reader of virtualization blogs, then you may have heard about the vLaunchPad. It lists hundreds of VMware & virtualization blogs, as well as links to resources and other material. The vLaunchPad is managed by Eric Siebert (@ericsiebert, vsphere-land.com), and he organizes the annual Top vBlog voting contest year after year. This year the Top vBlog contest is sponsored by Infinio.

In the 2014 voting, my “old” blog was voted to place 292 of 320. I should mention that blazilla.de had only German-language content. In a community where English is the predominant content language, this result may not be surprising. If you are interested in last year’s results, you can find them here. In 2014 I started vcloudnine.de, but I didn’t nominate it for the 2014 voting. Instead, I nominated blazilla.de for the Top vBlog 2014 contest. This year the tables have turned and I have nominated vcloudnine.de for the categories:

  • Best new blog (Blog started in 2014), and
  • Best independent blogger (Can’t work for VMware or a hardware/software vendor)

As always all blogs that are listed on the vLaunchPad are included in the general voting. I don’t have a goal for the voting, but a place between #49 and #100 would be nice. ;)

Some short sentences about vcloudnine.de:

vcloudnine.de is the personal blog of Patrick Terlisten. The site has a strong focus on virtualization, storage, networking and IT infrastructure in general. The main driver of this blog is to share knowledge and to write about topics that I think are worth mentioning. The views expressed anywhere on this site are mine and not the opinions and views of my employer or any vendor.

The predominating topics on vcloudnine.de are VMware, HP Storage, HP Data Protector, networking in general and Microsoft Exchange.

Andreas Lesslhumer (@lessi001, running-system.com) has created a nice statistic for 2014: Virtualization blogs 2014 by numbers. The statistic is based on the blogs that are listed on the vLaunchPad. vcloudnine.de was one of the 28 blogs that published more than 100 blog posts in 2014. In 2015 I have published 13 blog posts so far. But to be honest: it’s not about the number of posts you publish – the content matters! So if you vote for a blog, vote for the content, not for the number of published posts or the author.

Check out the Top vBlog 2015 landing page and don’t forget to vote for your favorite blogs! The voting will start soon!