<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	>
<channel>
	<title>Comments on: DragonFly on a VPS</title>
	<atom:link href="https://www.dragonflydigest.com/2015/01/07/dragonfly-on-a-vps/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.dragonflydigest.com/2015/01/07/dragonfly-on-a-vps/</link>
	<description>A running description of activity related to DragonFly BSD.</description>
	<lastBuildDate>Fri, 09 Jan 2015 20:43:42 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>By: Anonymous</title>
		<link>https://www.dragonflydigest.com/2015/01/07/dragonfly-on-a-vps/comment-page-1/#comment-325883</link>

		<dc:creator><![CDATA[Anonymous]]></dc:creator>
		<pubDate>Fri, 09 Jan 2015 20:43:42 +0000</pubDate>
		<guid isPermaLink="false">http://www.dragonflydigest.com/?p=15367#comment-325883</guid>

					<description><![CDATA[For the record, I use virtio-net in all my virtual dfly instances, and it works amazingly well.]]></description>
			<content:encoded><![CDATA[<p>For the record, I use virtio-net in all my virtual dfly instances, and it works amazingly well.</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>By: Venkatesh Srinivas</title>
		<link>https://www.dragonflydigest.com/2015/01/07/dragonfly-on-a-vps/comment-page-1/#comment-325685</link>

		<dc:creator><![CDATA[Venkatesh Srinivas]]></dc:creator>
		<pubDate>Thu, 08 Jan 2015 20:12:48 +0000</pubDate>
		<guid isPermaLink="false">http://www.dragonflydigest.com/?p=15367#comment-325685</guid>

					<description><![CDATA[On the virtio front, bits of things that should be done for better performance:

[virtio-pci]: The interrupt setup code should be reworked to remove legacy MSI support (nonstandard for virtio) and enable MSI-X where possible; this should slightly improve interrupt performance by itself, and should help a lot more once virtio-net supports multiple RX/TX queues.

[virtqueue/virtio core]: Restore indirect ring support -- QEMU&#039;s virtio-blk has a default 128-entry ring; virtio-blk can blow out that ring readily without indirect ring support. virtio-blk is often performance limited by this (in bandwidth).

[virtio-blk]: Indirect ring entries; see above.

[virtio-blk]: We investigated &quot;pacing&quot; a while back -- while the isr is running on an ithread, the front end (dispatch) code can queue directly to it, rather than generating a VQ kick. It changed performance, mostly for the better when we played with it.

[virtio-blk]: Investigate why DFly doesn&#039;t (didn&#039;t?) run on qemu when virtio-blk dataplane is enabled. It seemed to stall somewhere in the bootloader.

[virtio-net]: Multiple TX/RX queues, one per CPU; DragonFly could make excellent use of this. One serializer per-queue. Looking at the differences between the if_em and if_emx driver would show the diffs required, roughly.

Other stuff for running on KVM:
PV-EOI -- paravirt EOI should reduce the cost of ACKing a virtual interrupt, specifically the number of intercepts.

kvmclock: Not sure what the default DFly timer is on KVM these days; kvmclock may be a better option than the ACPI-safe timer, the CMOS RTC, and the i8254.

Dragonfly&#039;s idle thread tries to use MONITOR/MWAIT for a bit before switching to HLT (and then deeper). In workloads with rapid halt/activate cycles, the early monitor/mwait helped a lot, but it is not available in KVM. Some other fast halt may be worth looking into, before dropping to a full HLT. (HLT may be fairly expensive in a VM).]]></description>
			<content:encoded><![CDATA[<p>On the virtio front, bits of things that should be done for better performance:</p>
<p>[virtio-pci]: The interrupt setup code should be reworked to remove legacy MSI support (nonstandard for virtio) and enable MSI-X where possible; this should slightly improve interrupt performance by itself, and should help a lot more once virtio-net supports multiple RX/TX queues.</p>
<p>[virtqueue/virtio core]: Restore indirect ring support &#8212; QEMU&#8217;s virtio-blk has a default 128-entry ring; virtio-blk can blow out that ring readily without indirect ring support. virtio-blk is often performance limited by this (in bandwidth).</p>
<p>[virtio-blk]: Indirect ring entries; see above.</p>
<p>[virtio-blk]: We investigated &#8220;pacing&#8221; a while back &#8212; while the isr is running on an ithread, the front end (dispatch) code can queue directly to it, rather than generating a VQ kick. It changed performance, mostly for the better when we played with it.</p>
<p>[virtio-blk]: Investigate why DFly doesn&#8217;t (didn&#8217;t?) run on qemu when virtio-blk dataplane is enabled. It seemed to stall somewhere in the bootloader.</p>
<p>[virtio-net]: Multiple TX/RX queues, one per CPU; DragonFly could make excellent use of this. One serializer per-queue. Looking at the differences between the if_em and if_emx driver would show the diffs required, roughly.</p>
<p>Other stuff for running on KVM:<br />
PV-EOI &#8212; paravirt EOI should reduce the cost of ACKing a virtual interrupt, specifically the number of intercepts.</p>
<p>kvmclock: Not sure what the default DFly timer is on KVM these days; kvmclock may be a better option than the ACPI-safe timer, the CMOS RTC, and the i8254.</p>
<p>Dragonfly&#8217;s idle thread tries to use MONITOR/MWAIT for a bit before switching to HLT (and then deeper). In workloads with rapid halt/activate cycles, the early monitor/mwait helped a lot, but it is not available in KVM. Some other fast halt may be worth looking into, before dropping to a full HLT. (HLT may be fairly expensive in a VM).</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>By: Mike</title>
		<link>https://www.dragonflydigest.com/2015/01/07/dragonfly-on-a-vps/comment-page-1/#comment-325665</link>

		<dc:creator><![CDATA[Mike]]></dc:creator>
		<pubDate>Thu, 08 Jan 2015 18:15:42 +0000</pubDate>
		<guid isPermaLink="false">http://www.dragonflydigest.com/?p=15367#comment-325665</guid>

					<description><![CDATA[Speed - compared to Linux.]]></description>
			<content:encoded><![CDATA[<p>Speed &#8211; compared to Linux.</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>By: Sascha Wildner</title>
		<link>https://www.dragonflydigest.com/2015/01/07/dragonfly-on-a-vps/comment-page-1/#comment-325627</link>

		<dc:creator><![CDATA[Sascha Wildner]]></dc:creator>
		<pubDate>Thu, 08 Jan 2015 10:13:44 +0000</pubDate>
		<guid isPermaLink="false">http://www.dragonflydigest.com/?p=15367#comment-325627</guid>

					<description><![CDATA[What&#039;s missing from our virtio-pci?]]></description>
			<content:encoded><![CDATA[<p>What&#8217;s missing from our virtio-pci?</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>By: Mike</title>
		<link>https://www.dragonflydigest.com/2015/01/07/dragonfly-on-a-vps/comment-page-1/#comment-325622</link>

		<dc:creator><![CDATA[Mike]]></dc:creator>
		<pubDate>Thu, 08 Jan 2015 09:56:14 +0000</pubDate>
		<guid isPermaLink="false">http://www.dragonflydigest.com/?p=15367#comment-325622</guid>

					<description><![CDATA[Is there any news about the virtio-pci driver?]]></description>
			<content:encoded><![CDATA[<p>Is there any news about the virtio-pci driver?</p>
]]></content:encoded>
		
			</item>
		<item>
		<title>By: Siju</title>
		<link>https://www.dragonflydigest.com/2015/01/07/dragonfly-on-a-vps/comment-page-1/#comment-325593</link>

		<dc:creator><![CDATA[Siju]]></dc:creator>
		<pubDate>Thu, 08 Jan 2015 04:55:10 +0000</pubDate>
		<guid isPermaLink="false">http://www.dragonflydigest.com/?p=15367#comment-325593</guid>

					<description><![CDATA[This is wonderful news and what I was waiting for!
I wonder if they have a free tier where we could test the feasibility of mirroring from local to cloud VPS.]]></description>
			<content:encoded><![CDATA[<p>This is wonderful news and what I was waiting for!<br />
I wonder if they have a free tier where we could test the feasibility of mirroring from local to cloud VPS.</p>
]]></content:encoded>
		
			</item>
	</channel>
</rss>
