@retrage
Created January 7, 2026 01:55
LLM Prompt to find vulnerabilities in BitVisor's virtio-net implementation
You are an expert at finding and exploiting security vulnerabilities. Your specialty is finding vulnerabilities in
hypervisors. You will be provided with C source code. You will read the code carefully and look for typical memory bugs
that lead to critical vulnerabilities.
You are very careful to avoid reporting false positives. To avoid false positives, you carefully check your reasoning
before submitting a vulnerability report. You write down a detailed, step-by-step description of the code paths from
the entry points in the code up to the point where the vulnerability occurs. You then go through every conditional
statement on that code path and figure out concretely how an attacker ensures that it has the correct outcome. Finally,
you check that there are no contradictions in your reasoning and no unverified assumptions. This ensures you never
report a false positive. If, after performing your checks, you realize that your initial report of a vulnerability was
a false positive, you tell the user that it is a false positive and explain why.
When you are asked to check for vulnerabilities, you may be provided with all of the relevant source code, or some
functions and types may be missing. If missing functions or types are critical to understanding the code or a
vulnerability, you ask for their definitions rather than making unfounded assumptions. If missing functions or types
are part of the Linux kernel's API, you may assume they have their common definition. Only do this if you are confident
you know exactly what that definition is. If not, ask for the definitions.
DO NOT report hypothetical vulnerabilities. You must be able to cite all of the code involved in the vulnerability and
show exactly (using code examples and a walkthrough) how the vulnerability occurs. It is better to report no
vulnerabilities than to report false positives or hypotheticals.
Audit the code for security vulnerabilities. Remember to check all of your reasoning. Avoid reporting false positives.
It is better to say that you cannot find any vulnerabilities than to report a false positive.
The code is for a thin type-1 hypervisor's VirtIO network para-virtualized device implementation. There are two components:
- The VirtIO network device component, which acts as the frontend device for the untrusted guest.
- The physical network device driver backend component, which handles requests from the physical device and processes untrusted packets from external devices.
The backend device driver propagates the processed packets to the frontend VirtIO network device. Assume that incoming packets can be malicious and that the guest is untrusted, so packets from the guest can also be malicious.
Attackers have two attack surfaces: one from external devices and the other from the local guest.
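For orientation, below is a minimal, illustrative sketch of the split-virtqueue layout the frontend operates on. It mirrors the vr_desc / vr_avail / vr_used definitions and the descriptor-chain walk in the attached ./drivers/net/virtio_net.c (queue size 256). The walk_chain helper name and the typedefs are stand-ins introduced here for illustration only; this sketch is reviewer context, not part of the code under audit.

/* Stand-ins for the fixed-width typedefs used by the attached sources. */
typedef unsigned short u16;
typedef unsigned int u32;
typedef unsigned long long u64;

/* Illustrative sketch: all ring memory is guest-controlled, so every field
 * read below is untrusted input. */
struct vr_desc { u64 addr; u32 len; u32 flags_next; }; /* low 16 bits flags, high 16 bits next */
struct vr_avail { u16 flags; u16 idx; u16 ring[256]; };
struct vr_used { u16 flags; u16 idx; struct { u32 id; u32 len; } ring[256]; };

/* Hypothetical helper showing how the device side follows one descriptor
 * chain published through the available ring: start at avail->ring[head],
 * then keep following the "next" index while VIRTQ_DESC_F_NEXT (bit 0) is
 * set, exactly as the loops in do_net_send/do_net_recv/do_net_ctrl do. */
void
walk_chain (struct vr_desc *desc, u16 head, u16 queue_size)
{
	u32 flags_next = ((u32)head << 16) | 1; /* treat the head as if NEXT were set */
	while (flags_next & 1) {
		u16 d = (flags_next >> 16) % queue_size; /* guest-chosen index, reduced modulo queue size */
		/* desc[d].addr and desc[d].len are guest-chosen; the device maps
		 * addr for len bytes and copies packet data to or from it. */
		flags_next = desc[d].flags_next;
	}
}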
<documents>
<document index="1">
<source>./drivers/net/virtio_net.c</source>
<document_content>
/*
* Copyright (c) 2007, 2008 University of Tsukuba
* Copyright (c) 2015 Igel Co., Ltd
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* 1. Redistributions of source code must retain the above copyright notice,
* this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright notice,
* this list of conditions and the following disclaimer in the documentation
* and/or other materials provided with the distribution.
* 3. Neither the name of the University of Tsukuba nor the names of its
* contributors may be used to endorse or promote products derived from
* this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
* LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGE.
*/
#include <core.h>
#include <core/dres.h>
#include <core/mm.h>
#include <core/panic.h>
#include <core/time.h>
#include <net/netapi.h>
#include <pci.h>
#include "virtio_net.h"
#define OFFSET_TO_DWORD_BLOCK(offset) ((offset) / sizeof (u32))
#define VIRTIO_N_QUEUES 3
#define VIRTIO_NET_PKT_BATCH 16
#define VIRTIO_NET_QUEUE_SIZE 256
#define VIRTIO_NET_MSIX_N_VECTORS 4
#define VIRTIO_NET_MSIX_TAB_LEN (VIRTIO_NET_MSIX_N_VECTORS * 16)
#define VIRTIO_NET_MSIX_PBA_LEN (VIRTIO_NET_MSIX_N_VECTORS * 8)
struct virtio_pci_regs32 {
u32 initial_val;
u32 mask;
};
struct virtio_ext_cap {
u8 offset;
bool replace_next;
u8 new_next;
};
/* Device Status bits */
#define VIRTIO_STATUS_DRIVER_OK 0x4
#define VIRTIO_STATUS_FEATURES_OK 0x8
/* Feature bits */
#define VIRTIO_NET_F_MAC (1ULL << 5)
#define VIRTIO_NET_F_CTRL_VQ (1ULL << 17)
#define VIRTIO_NET_F_CTRL_RX (1ULL << 18)
#define VIRTIO_F_VERSION_1 (1ULL << 32)
#define VIRTIO_F_ACCESS_PLATFORM (1ULL << 33)
#define VIRTIO_NET_DEVICE_FEATURES (VIRTIO_NET_F_MAC | VIRTIO_NET_F_CTRL_VQ | \
VIRTIO_NET_F_CTRL_RX | \
VIRTIO_F_VERSION_1 | \
VIRTIO_F_ACCESS_PLATFORM)
/* For ctrl commands */
#define VIRTIO_NET_ACK_OK 0
#define VIRTIO_NET_ACK_ERR 1
/* Ctrl command class */
#define VIRTIO_NET_CTRL_RX 0
#define VIRTIO_NET_CTRL_MAC 1
/* Ctrl command code for VIRTIO_NET_CTRL_RX */
#define VIRTIO_NET_CTRL_RX_PROMISC 0
#define VIRTIO_NET_CTRL_RX_ALLMULTI 1
#define VIRTIO_NET_CTRL_MAC_TABLE_MAX_ENTRIES 16
/* Ctrl command code for VIRTIO_NET_CTRL_MAC */
#define VIRTIO_NET_CTRL_MAC_TABLE_SET 0
#define VIRTIO_NET_CTRL_MAC_ADDR_SET 1
#define VIRTIO_PCI_CAP_COMMON_CFG 1
#define VIRTIO_PCI_CAP_NOTIFY_CFG 2
#define VIRTIO_PCI_CAP_ISR_CFG 3
#define VIRTIO_PCI_CAP_DEVICE_CFG 4
#define VIRTIO_PCI_CAP_PCI_CFG 5
struct virtio_pci_cap {
u8 cap_vndr;
u8 cap_next;
u8 cap_len;
u8 cfg_type;
u8 bar;
u8 padding[3];
u32 offset;
u32 length;
};
struct virtio_pci_cfg_cap {
struct virtio_pci_cap cap;
u8 pci_cfg_data[4];
};
#define VIRTIO_MSIX_CAP_OFFSET 0x40
#define VIRTIO_COMMON_CFG_CAP_OFFSET 0x50
#define VIRTIO_ISR_CFG_CAP_OFFSET 0x60
#define VIRTIO_DEV_CFG_CAP_OFFSET 0x70
#define VIRTIO_NOTIFY_CFG_CAP_OFFSET 0x80
#define VIRTIO_PCI_CFG_CAP_OFFSET 0x94
#define VIRTIO_EXT_CAP_OFFSET 0xA8
#define VIRTIO_EXT_CAP_DWORD_BLOCK (VIRTIO_EXT_CAP_OFFSET / sizeof (u32))
#define VIRTIO_EXT_REGS32_NUM \
(PCI_CONFIG_REGS32_NUM - VIRTIO_EXT_CAP_DWORD_BLOCK)
#define VIRTIO_PCI_CAPLEN (sizeof (struct virtio_pci_cap))
#define VIRTIO_CAP_1ST_DWORD(next, extra_cap_len, type) \
(PCI_CAP_VENDOR | ((next) & 0xFF) << 8 | \
((VIRTIO_PCI_CAPLEN + (extra_cap_len)) & 0xFF) << 16 | \
((type) & 0xFF) << 24)
#define VIRTIO_MMIO_BAR 2
#define VIRTIO_CFG_SIZE 0x100
#define VIRTIO_COMMON_CFG_OFFSET 0x0
#define VIRTIO_NOTIFY_CFG_OFFSET (VIRTIO_COMMON_CFG_OFFSET + VIRTIO_CFG_SIZE)
#define VIRTIO_ISR_CFG_OFFSET (VIRTIO_NOTIFY_CFG_OFFSET + VIRTIO_CFG_SIZE)
#define VIRTIO_DEV_CFG_OFFSET (VIRTIO_ISR_CFG_OFFSET + VIRTIO_CFG_SIZE)
#define VIRTIO_MSIX_OFFSET (VIRTIO_DEV_CFG_OFFSET + VIRTIO_CFG_SIZE)
#define VIRTIO_PBA_OFFSET (VIRTIO_MSIX_OFFSET + VIRTIO_NET_MSIX_TAB_LEN)
#define VIRTIO_COMMON_CFG_LEN 56
#define VIRTIO_NOTIFY_CFG_LEN 2
#define VIRTIO_ISR_CFG_LEN 4
#define VIRTIO_DEV_CFG_LEN 64 /* 64 is a workaround for macOS's driver */
struct virtio_net_hdr {
#define VIRTIO_NET_HDR_F_NEEDS_CSUM 1
#define VIRTIO_NET_HDR_F_DATA_VALID 2
#define VIRTIO_NET_HDR_F_RSC_INFO 4
u8 flags;
#define VIRTIO_NET_HDR_GSO_NONE 0
#define VIRTIO_NET_HDR_GSO_TCPV4 1
#define VIRTIO_NET_HDR_GSO_UDP 3
#define VIRTIO_NET_HDR_GSO_TCPV6 4
#define VIRTIO_NET_HDR_GSO_ECN 0x80
u8 gso_type;
u16 hdr_len;
u16 gso_size;
u16 csum_start;
u16 csum_offset;
u16 num_buffers;
};
struct virtio_net {
u32 prev_port;
u32 port;
u32 cmd;
u32 queue[VIRTIO_N_QUEUES];
u64 desc[VIRTIO_N_QUEUES];
u64 avail[VIRTIO_N_QUEUES];
u64 used[VIRTIO_N_QUEUES];
u32 mmio_base;
u32 mmio_len;
u64 device_feature;
u64 driver_feature;
u32 device_feature_select;
u32 driver_feature_select;
struct dres_reg *r_mm;
struct dres_reg *r_io;
void *mmio_param;
void (*mmio_change) (void *mmio_param, struct pci_bar_info *bar_info,
struct dres_reg *new_r);
bool mmio_base_emul;
bool mmio_base_emul_1;
bool v1;
bool v1_legacy; /* Workaround for non-compliant v1 driver */
bool ready;
u8 *macaddr;
struct pci_device *dev;
const struct mm_as *as_dma;
net_recv_callback_t *recv_func;
void *recv_param;
void (*intr_clear) (void *intr_param);
void (*intr_set) (void *intr_param);
void (*intr_disable) (void *intr_param);
void (*intr_enable) (void *intr_param);
void *intr_param;
u64 last_time;
u8 dev_status;
u16 selected_queue;
u16 queue_size[VIRTIO_N_QUEUES];
bool queue_enable[VIRTIO_N_QUEUES];
int multifunction;
bool intr_suppress;
i8 intr_suppress_running;
spinlock_t intr_suppress_lock;
bool intr_enabled;
bool intr;
bool msix;
u16 msix_cfgvec;
u16 msix_quevec[VIRTIO_N_QUEUES];
bool msix_enabled;
bool msix_mask;
void (*msix_disable) (void *msix_param);
void (*msix_enable) (void *msix_param);
void (*msix_vector_change) (void *msix_param, unsigned int queue,
int vector);
void (*msix_generate) (void *msix_param, unsigned int queue);
void (*msix_mmio_update) (void *msix_param);
void *msix_param;
struct virtio_ext_cap ext_caps[VIRTIO_EXT_REGS32_NUM];
u16 next_ext_cap;
u16 next_ext_cap_offset;
bool pcie_cap;
struct virtio_pci_cfg_cap pci_cfg;
struct msix_table msix_table_entry[VIRTIO_N_QUEUES];
u8 buf[VIRTIO_NET_PKT_BATCH][2048];
spinlock_t msix_lock;
u8 unicast_filter[VIRTIO_NET_CTRL_MAC_TABLE_MAX_ENTRIES][6];
u8 multicast_filter[VIRTIO_NET_CTRL_MAC_TABLE_MAX_ENTRIES][6];
u32 unicast_filter_entries;
u32 multicast_filter_entries;
bool allow_multicast;
bool allow_promisc;
};
struct vr_desc {
u64 addr;
u32 len;
#define VIRTQ_DESC_F_NEXT 1
#define VIRTQ_DESC_F_WRITE 2
#define VIRTQ_DESC_F_INDIRECT 4
u32 flags_next; /* lower is flags, upper is next */
};
struct vr_avail {
#define VIRTQ_AVAIL_F_NO_INTERRUPT 1
u16 flags;
u16 idx;
u16 ring[VIRTIO_NET_QUEUE_SIZE];
};
struct vr_used {
#define VIRTQ_USED_F_NO_NOTIFY 1
u16 flags;
u16 idx;
struct {
u32 id;
u32 len;
} ring[VIRTIO_NET_QUEUE_SIZE];
};
#define DESC_SIZE (sizeof (struct vr_desc) * VIRTIO_NET_QUEUE_SIZE)
#define AVAIL_SIZE (sizeof (struct vr_avail))
#define DA_N_PAGES ((DESC_SIZE + AVAIL_SIZE + (PAGESIZE - 1)) / PAGESIZE)
#define PADDING_SIZE (DA_N_PAGES * PAGESIZE - (DESC_SIZE + AVAIL_SIZE))
#define AVAIL_MAP_SIZE(queue_size) (2 + 2 + 2 * (queue_size))
#define USED_MAP_SIZE(queue_size) (2 + 2 + 8 * (queue_size))
struct virtio_ring {
struct vr_desc desc[VIRTIO_NET_QUEUE_SIZE];
struct vr_avail avail;
u8 padding[PADDING_SIZE];
struct vr_used used;
} __attribute__ ((packed));
struct handle_io_data {
u32 size;
void (*handler) (struct virtio_net *vnet, bool wr, union mem *data,
const void *extra_info);
const void *extra_info;
};
typedef void (*v1_handler_t) (struct virtio_net *vnet, bool wr, u32 iosize,
u32 mmio_offset, union mem *data);
static struct handle_io_data vnet_pci_data[PCI_CONFIG_REGS32_NUM + 1];
static const struct virtio_pci_regs32 vnet_pci_initial_val[] = {
/* PCI Common config area */
{ 0x10001AF4, 0x00000000 }, { 0x00100000, 0xFFFF0000 },
{ 0x02000000, 0xFFFFFF00 }, { 0x00000000, 0xFF00FFFF },
{ 0x00000000, 0x00000000 }, { 0x00000000, 0x00000000 },
{ 0x00000000, 0x00000000 }, { 0x00000000, 0x00000000 },
{ 0x00000000, 0x00000000 }, { 0x00000000, 0x00000000 },
{ 0x00000000, 0xFFFFFFFF }, { 0x00011AF4, 0x00000000 },
{ 0x00000000, 0xFFFFFFFF }, { 0x00000000, 0x00000000 },
{ 0x00000000, 0x00000000 }, { 0x000001FF, 0xFFFFFFFF },
/* MSI-X config area */
{ 0x00000000 }, { VIRTIO_MMIO_BAR + VIRTIO_MSIX_OFFSET },
{ VIRTIO_MMIO_BAR + VIRTIO_PBA_OFFSET }, { 0x00000000 },
/* VIRTIO_PCI_CAP_COMMON config area */
{ VIRTIO_CAP_1ST_DWORD (VIRTIO_NOTIFY_CFG_CAP_OFFSET, 0,
VIRTIO_PCI_CAP_COMMON_CFG) },
{ VIRTIO_MMIO_BAR }, { VIRTIO_COMMON_CFG_OFFSET },
{ VIRTIO_COMMON_CFG_LEN },
/* VIRTIO_PCI_CAP_ISR config area */
{ VIRTIO_CAP_1ST_DWORD (VIRTIO_DEV_CFG_CAP_OFFSET, 0,
VIRTIO_PCI_CAP_ISR_CFG) },
{ VIRTIO_MMIO_BAR }, { VIRTIO_ISR_CFG_OFFSET },
{ VIRTIO_ISR_CFG_LEN },
/* VIRTIO_PCI_CAP_DEVICE config area */
{ VIRTIO_CAP_1ST_DWORD (VIRTIO_PCI_CFG_CAP_OFFSET, 0,
VIRTIO_PCI_CAP_DEVICE_CFG) },
{ VIRTIO_MMIO_BAR }, { VIRTIO_DEV_CFG_OFFSET },
{ VIRTIO_DEV_CFG_LEN },
/* VIRTIO_PCI_CAP_NOTIFY config area */
{ VIRTIO_CAP_1ST_DWORD (VIRTIO_ISR_CFG_CAP_OFFSET, 4,
VIRTIO_PCI_CAP_NOTIFY_CFG) },
{ VIRTIO_MMIO_BAR }, { VIRTIO_NOTIFY_CFG_OFFSET },
{ VIRTIO_NOTIFY_CFG_LEN },
{ 0x00000000 },
/* VIRTIO_PCI_CAP_PCI config area */
{ 0x00000000 }, { 0x00000000 }, { 0x00000000 },
{ 0x00000000 }, { 0x00000000 },
/* The rest */
{ 0x00000000 }, { 0x00000000 },
{ 0x00000000 }, { 0x00000000 }, { 0x00000000 }, { 0x00000000 },
{ 0x00000000 }, { 0x00000000 }, { 0x00000000 }, { 0x00000000 },
{ 0x00000000 }, { 0x00000000 }, { 0x00000000 }, { 0x00000000 },
{ 0x00000000 }, { 0x00000000 }, { 0x00000000 }, { 0x00000000 },
{ 0x00000000 }, { 0x00000000 }, { 0x00000000 }, { 0x00000000 },
};
static void
handle_io_with_default (struct virtio_net *vnet, bool wr, u32 iosize,
u32 offset, void *buf, const struct handle_io_data *d,
void (*default_func) (struct virtio_net *vnet, bool wr,
u32 iosize, u32 offset,
void *buf))
{
union mem tmp;
if (!wr)
memset (buf, 0, iosize);
/* Firstly, seek to the first handler */
while (d->size && offset >= d->size) {
offset -= d->size;
d++;
}
/* Deal with unaligned access first */
if (d->size && offset > 0) {
d->handler (vnet, false, &tmp, d->extra_info);
void *p = &tmp.byte + offset;
u32 s = d->size - offset;
if (s > iosize)
s = iosize;
if (wr) {
memcpy (p, buf, s);
d->handler (vnet, true, &tmp, d->extra_info);
} else {
memcpy (buf, p, s);
}
if (s == iosize)
return;
buf += s;
iosize -= s;
offset = 0;
d++;
}
/* From this point onward, all accesses are aligned */
while (d->size && iosize >= d->size) {
d->handler (vnet, wr, buf, d->extra_info);
buf += d->size;
iosize -= d->size;
offset -= d->size;
d++;
}
/* Deal with partial accesses */
if (d->size && iosize) {
d->handler (vnet, false, &tmp, d->extra_info);
if (wr) {
memcpy (&tmp, buf, iosize);
d->handler (vnet, true, &tmp, d->extra_info);
} else {
memcpy (buf, &tmp, iosize);
}
return;
}
if (iosize && default_func)
default_func (vnet, wr, iosize, offset, buf);
}
static void
handle_io (struct virtio_net *vnet, bool wr, u32 iosize, u32 offset, void *buf,
const struct handle_io_data *d)
{
handle_io_with_default (vnet, wr, iosize, offset, buf, d, NULL);
}
static void
virtio_net_reset_ctrl_mac (struct virtio_net *vnet)
{
memset (vnet->unicast_filter, 0, sizeof vnet->unicast_filter);
memset (vnet->multicast_filter, 0, sizeof vnet->multicast_filter);
vnet->unicast_filter_entries = 0;
vnet->multicast_filter_entries = 0;
}
static void
virtio_net_reset_dev (struct virtio_net *vnet)
{
uint i;
for (i = 0; i < VIRTIO_N_QUEUES; i++) {
vnet->queue[i] = 0;
vnet->desc[i] = 0;
vnet->avail[i] = 0;
vnet->used[i] = 0;
vnet->queue_size[i] = VIRTIO_NET_QUEUE_SIZE;
vnet->queue_enable[i] = false;
}
vnet->driver_feature = 0;
vnet->device_feature_select = 0;
vnet->driver_feature_select = 0;
vnet->v1 = false;
vnet->v1_legacy = false;
vnet->ready = false;
vnet->dev_status = 0;
vnet->selected_queue = 0;
virtio_net_reset_ctrl_mac (vnet);
vnet->allow_multicast = false;
vnet->allow_promisc = false;
}
static uint
virtio_net_hdr_size (bool legacy)
{
/* In legacy mode, 16bit field "num_buffers" is not
* presented. */
return legacy ? sizeof (struct virtio_net_hdr) - 2 :
sizeof (struct virtio_net_hdr);
}
static void
virtio_net_get_nic_info (void *handle, struct nicinfo *info)
{
struct virtio_net *vnet = handle;
info->mtu = 1500;
info->media_speed = 1000000000;
memcpy (info->mac_address, vnet->macaddr, 6);
}
static void
virtio_net_enable_interrupt (struct virtio_net *vnet)
{
vnet->intr_suppress = false;
vnet->intr_enable (vnet->intr_param);
vnet->intr_enabled = true;
printf ("virtio_net: enable interrupt\n");
}
static void
virtio_net_disable_interrupt (struct virtio_net *vnet)
{
vnet->intr_suppress = false;
vnet->intr_enabled = false;
vnet->intr_disable (vnet->intr_param);
printf ("virtio_net: disable interrupt\n");
}
static void
virtio_net_suppress_interrupt (struct virtio_net *vnet, bool yes)
{
if (vnet->msix_enabled)
return;
if (vnet->intr_suppress == yes)
return;
/* intr_suppress_running:
* == 0: this function is not running in background
* != 0: this function is running in background
* ==-1: intr_enable will be called in background
*
* background foreground
* running !yes setting !yes: background continues
* running !yes setting yes: background continues
* running yes setting !yes: background will call intr_enable
* running yes setting yes: background continues
*/
if (!yes) {
int skip;
spinlock_lock (&vnet->intr_suppress_lock);
if (!vnet->intr_suppress_running) {
conflict:
skip = 0;
vnet->intr_suppress_running = 1;
vnet->intr_suppress = false;
} else {
skip = 1;
vnet->intr_suppress_running = -1;
}
spinlock_unlock (&vnet->intr_suppress_lock);
if (!skip) {
if (vnet->intr_enabled)
vnet->intr_enable (vnet->intr_param);
spinlock_lock (&vnet->intr_suppress_lock);
vnet->intr_suppress_running = 0;
spinlock_unlock (&vnet->intr_suppress_lock);
}
}
if (yes && vnet->intr_enabled) {
int skip = 1;
spinlock_lock (&vnet->intr_suppress_lock);
if (!vnet->intr_suppress_running) {
skip = 0;
vnet->intr_suppress_running = 1;
vnet->intr_suppress = true;
}
spinlock_unlock (&vnet->intr_suppress_lock);
if (!skip) {
vnet->intr_disable (vnet->intr_param);
spinlock_lock (&vnet->intr_suppress_lock);
if (vnet->intr_suppress_running < 0)
goto conflict;
vnet->intr_suppress_running = 0;
spinlock_unlock (&vnet->intr_suppress_lock);
}
}
}
static void
virtio_net_trigger_interrupt (struct virtio_net *vnet, unsigned int queue)
{
if (vnet->msix_enabled) {
virtio_net_suppress_interrupt (vnet, false);
spinlock_lock (&vnet->msix_lock);
if (!vnet->msix_mask)
vnet->msix_generate (vnet->msix_param, queue);
spinlock_unlock (&vnet->msix_lock);
} else {
vnet->intr = true;
virtio_net_suppress_interrupt (vnet, false);
vnet->intr_set (vnet->intr_param);
}
}
static bool
virtio_net_untrigger_interrupt (struct virtio_net *vnet)
{
vnet->intr_clear (vnet->intr_param);
if (vnet->intr) {
vnet->intr = false;
return true;
}
return false;
}
static void
do_net_send (struct virtio_net *vnet, struct vr_desc *desc,
struct vr_avail *avail, struct vr_used *used, bool legacy_hdr,
unsigned int num_packets, void **packets,
unsigned int *packet_sizes, bool print_ok)
{
u16 idx_a, idx_u, ring;
u32 len, desc_len, i, j;
u32 ring_tmp, d;
u8 *buf_ring;
u8 *buf;
int buflen;
bool intr = false;
uint desc_hdr_len = virtio_net_hdr_size (legacy_hdr);
loop:
if (!num_packets--)
goto ret;
buf = *packets++;
buflen = *packet_sizes++;
idx_a = avail->idx;
idx_u = used->idx;
if (idx_a == idx_u) {
u64 now = get_time ();
if (now - vnet->last_time >= 1000000 && print_ok &&
used->flags)
printf ("%s: Receive ring buffer full\n", __func__);
/* Do not suppress interrupts until the guest reads
* ISR status. */
if (intr || vnet->intr)
goto ret;
/* Suppress interrupts. While the used->flags is
* cleared, the guest sends a notification when
* updating available ring index. */
used->flags = 0;
virtio_net_suppress_interrupt (vnet, true);
if (vnet->intr) {
/* In case of conflicting with
* virtio_net_trigger_interrupt(). */
virtio_net_suppress_interrupt (vnet, false);
goto ret;
}
vnet->last_time = now;
/* Check available ring index again to avoid race
* condition. */
idx_a = avail->idx;
if (idx_a == idx_u)
goto ret;
}
used->flags = VIRTQ_USED_F_NO_NOTIFY;
virtio_net_suppress_interrupt (vnet, false);
idx_u %= vnet->queue_size[0];
ring = avail->ring[idx_u];
ring_tmp = ((u32)ring << 16) | 1;
len = 0;
while (ring_tmp & 1) {
ring_tmp >>= 16;
d = ring_tmp % vnet->queue_size[0];
desc_len = desc[d].len;
buf_ring = mapmem_as (vnet->as_dma, desc[d].addr, desc_len,
MAPMEM_WRITE);
i = 0;
if (len < desc_hdr_len) {
i = desc_hdr_len - len;
if (i > desc_len)
i = desc_len;
if (i == desc_hdr_len) {
/* Fast path */
memset (buf_ring, 0, i);
if (!legacy_hdr) {
struct virtio_net_hdr *h;
h = (struct virtio_net_hdr *)buf_ring;
h->num_buffers = 1;
}
} else {
/* Slow path */
union {
u8 b[sizeof (struct virtio_net_hdr)];
struct virtio_net_hdr s;
} h = {
.s = {
.num_buffers = 1,
}
};
memcpy (buf_ring, &h.b[len], i);
}
len += i;
}
if (len >= desc_hdr_len && i < desc_len) {
j = buflen - (len - desc_hdr_len);
if (j > desc_len - i)
j = desc_len - i;
memcpy (&buf_ring[i], &buf[len - desc_hdr_len], j);
len += j;
}
unmapmem (buf_ring, desc_len);
ring_tmp = desc[d].flags_next;
}
if (0)
printf ("Receive %u bytes %02X:%02X:%02X:%02X:%02X:%02X"
" <- %02X:%02X:%02X:%02X:%02X:%02X\n", buflen,
buf[0], buf[1], buf[2], buf[3], buf[4], buf[5],
buf[6], buf[7], buf[8], buf[9], buf[10], buf[11]);
used->ring[idx_u].id = ring;
used->ring[idx_u].len = len;
asm volatile ("" : : : "memory");
used->idx++;
intr = true;
goto loop;
ret:
if (avail->flags & VIRTQ_AVAIL_F_NO_INTERRUPT) /* No interrupt */
intr = false;
if (intr)
virtio_net_trigger_interrupt (vnet, 0);
}
/* Send to guest */
static void
virtio_net_send (void *handle, unsigned int num_packets, void **packets,
unsigned int *packet_sizes, bool print_ok)
{
struct virtio_net *vnet = handle;
struct virtio_ring *p;
struct vr_desc *desc;
struct vr_avail *avail;
struct vr_used *used;
uint queue_size;
if (!vnet->ready)
return;
if (vnet->v1) {
if (!vnet->queue_enable[0])
return;
queue_size = vnet->queue_size[0];
desc = mapmem_as (vnet->as_dma, vnet->desc[0],
sizeof *desc * queue_size, MAPMEM_WRITE);
avail = mapmem_as (vnet->as_dma, vnet->avail[0],
AVAIL_MAP_SIZE (queue_size), MAPMEM_WRITE);
used = mapmem_as (vnet->as_dma, vnet->used[0],
USED_MAP_SIZE (queue_size), MAPMEM_WRITE);
do_net_send (vnet, desc, avail, used, vnet->v1_legacy,
num_packets, packets, packet_sizes, print_ok);
unmapmem (used, USED_MAP_SIZE (queue_size));
unmapmem (avail, AVAIL_MAP_SIZE (queue_size));
unmapmem (desc, sizeof *desc * queue_size);
} else {
p = mapmem_as (vnet->as_dma, (u64)vnet->queue[0] << 12,
sizeof *p, MAPMEM_WRITE);
do_net_send (vnet, p->desc, &p->avail, &p->used, true,
num_packets, packets, packet_sizes, print_ok);
unmapmem (p, sizeof *p);
}
}
static void
do_net_recv (struct virtio_net *vnet, struct vr_desc *desc,
struct vr_avail *avail, struct vr_used *used, bool legacy_hdr)
{
u16 idx_a, idx_u, ring;
u32 len, desc_len, count = 0, pkt_sizes[VIRTIO_NET_PKT_BATCH];
u32 ring_tmp, d;
u8 *buf, *buf_ring;
void *pkts[VIRTIO_NET_PKT_BATCH];
bool intr = false;
uint desc_hdr_len = virtio_net_hdr_size (legacy_hdr);
idx_a = avail->idx;
while (idx_a != used->idx) {
idx_u = used->idx % vnet->queue_size[1];
ring = avail->ring[idx_u];
ring_tmp = ((u32)ring << 16) | 1;
len = 0;
buf = vnet->buf[count];
while (ring_tmp & 1) {
ring_tmp >>= 16;
d = ring_tmp % vnet->queue_size[1];
desc_len = desc[d].len;
/* Detect unsupported MTU setting or corrupted
* case like 0xFFFFFFFF. */
if (desc_len > sizeof vnet->buf[count] - len) {
len = 0;
break;
}
buf_ring = mapmem_as (vnet->as_dma, desc[d].addr,
desc_len, 0);
memcpy (&buf[len], buf_ring, desc_len);
unmapmem (buf_ring, desc_len);
len += desc_len;
ring_tmp = desc[d].flags_next;
}
#if 0
printf ("Send %u bytes %02X:%02X:%02X:%02X:%02X:%02X"
" <- %02X:%02X:%02X:%02X:%02X:%02X\n",
len - desc_hdr_len,
buf[desc_hdr_len], buf[desc_hdr_len + 1],
buf[desc_hdr_len + 2], buf[desc_hdr_len + 3],
buf[desc_hdr_len + 4], buf[desc_hdr_len + 5],
buf[desc_hdr_len + 6], buf[desc_hdr_len + 7],
buf[desc_hdr_len + 8], buf[desc_hdr_len + 9],
buf[desc_hdr_len + 10], buf[desc_hdr_len + 11]);
#endif
used->ring[idx_u].id = ring;
used->ring[idx_u].len = len;
asm volatile ("" : : : "memory");
used->idx++;
intr = true;
if (len > desc_hdr_len) {
pkts[count] = &buf[desc_hdr_len];
pkt_sizes[count] = len - desc_hdr_len;
count++;
if (count == VIRTIO_NET_PKT_BATCH) {
vnet->recv_func (vnet, count, pkts, pkt_sizes,
vnet->recv_param, NULL);
count = 0;
}
}
}
if (count)
vnet->recv_func (vnet, count, pkts, pkt_sizes,
vnet->recv_param, NULL);
if (avail->flags & VIRTQ_AVAIL_F_NO_INTERRUPT)
intr = false;
if (intr)
virtio_net_trigger_interrupt (vnet, 1);
}
/* Receive from guest */
static void
virtio_net_recv (struct virtio_net *vnet)
{
struct virtio_ring *p;
struct vr_desc *desc;
struct vr_avail *avail;
struct vr_used *used;
if (!vnet->ready)
return;
if (vnet->v1) {
uint queue_size;
if (!vnet->queue_enable[1])
return;
queue_size = vnet->queue_size[1];
desc = mapmem_as (vnet->as_dma, vnet->desc[1],
sizeof *desc * queue_size, MAPMEM_WRITE);
avail = mapmem_as (vnet->as_dma, vnet->avail[1],
AVAIL_MAP_SIZE (queue_size), MAPMEM_WRITE);
used = mapmem_as (vnet->as_dma, vnet->used[1],
USED_MAP_SIZE (queue_size), MAPMEM_WRITE);
do_net_recv (vnet, desc, avail, used, vnet->v1_legacy);
unmapmem (used, USED_MAP_SIZE (queue_size));
unmapmem (avail, AVAIL_MAP_SIZE (queue_size));
unmapmem (desc, sizeof *desc * queue_size);
} else {
p = mapmem_as (vnet->as_dma, (u64)vnet->queue[1] << 12,
sizeof *p, MAPMEM_WRITE);
do_net_recv (vnet, p->desc, &p->avail, &p->used, true);
unmapmem (p, sizeof *p);
}
}
static u8
process_ctrl_rx_cmd (struct virtio_net *vnet, u8 *cmd, unsigned int cmd_size)
{
u8 ack = VIRTIO_NET_ACK_OK;
if (cmd_size < 4) {
printf ("virtio_net: invalid command size for "
"VIRTIO_NET_CTRL_RX\n");
ack = VIRTIO_NET_ACK_ERR;
goto end;
}
switch (cmd[1]) {
case VIRTIO_NET_CTRL_RX_PROMISC:
vnet->allow_promisc = !!cmd[2];
if (0)
printf ("virtio_net: allow_promisc %u\n", !!cmd[2]);
break;
case VIRTIO_NET_CTRL_RX_ALLMULTI:
vnet->allow_multicast = !!cmd[2];
if (0)
printf ("virtio_net: allow_multicast %u\n", !!cmd[2]);
break;
default:
printf ("virtio_net: unsupported code %u for "
"VIRTIO_NET_CTRL_RX\n", cmd[1]);
ack = VIRTIO_NET_ACK_ERR;
}
end:
return ack;
}
static u8
process_ctrl_mac_cmd (struct virtio_net *vnet, u8 *cmd, unsigned int cmd_size)
{
u8 ack = VIRTIO_NET_ACK_OK;
u32 uni_n_entries, multi_n_entries, uni_table_size, multi_table_size;
u8 *c;
if (cmd[1] != VIRTIO_NET_CTRL_MAC_TABLE_SET) {
printf ("virtio_net: currently support only "
"VIRTIO_NET_CTRL_MAC_TABLE_SET for "
"VIRTIO_NET_CTRL_MAC\n");
ack = VIRTIO_NET_ACK_ERR;
goto end;
}
/* VIRTIO_NET_CTRL_MAC_TABLE_SET must be at least 11 bytes */
if (cmd_size < 11) {
printf ("virtio_net: invalid command size for "
"VIRTIO_NET_CTRL_MAC_TABLE_SET\n");
ack = VIRTIO_NET_ACK_ERR;
goto end;
}
uni_n_entries = *(u32 *)&cmd[2];
if (uni_n_entries > VIRTIO_NET_CTRL_MAC_TABLE_MAX_ENTRIES) {
printf ("virtio_net: unicast filtering table too large, "
"ignore the command\n");
ack = VIRTIO_NET_ACK_ERR;
goto end;
}
uni_table_size = sizeof vnet->unicast_filter[0] * uni_n_entries;
multi_n_entries = *(u32 *)&cmd[2 + 4 + uni_table_size];
if (multi_n_entries > VIRTIO_NET_CTRL_MAC_TABLE_MAX_ENTRIES) {
printf ("virtio_net: multicast filtering table too large, "
"ignore the command\n");
ack = VIRTIO_NET_ACK_ERR;
goto end;
}
multi_table_size = sizeof vnet->multicast_filter[0] * multi_n_entries;
virtio_net_reset_ctrl_mac (vnet);
c = &cmd[2 + 4];
vnet->unicast_filter_entries = uni_n_entries;
if (uni_table_size)
memcpy (vnet->unicast_filter, c, uni_table_size);
c = &cmd[2 + 4 + uni_table_size + 4];
vnet->multicast_filter_entries = multi_n_entries;
if (multi_table_size)
memcpy (vnet->multicast_filter, c, multi_table_size);
if (0) {
uint i;
printf ("virtio_net: unicast filter %u entries\n",
uni_n_entries);
for (i = 0; i < uni_n_entries; i++) {
printf ("%u: %02X %02X %02X %02X %02X %02X\n", i,
vnet->unicast_filter[i][0],
vnet->unicast_filter[i][1],
vnet->unicast_filter[i][2],
vnet->unicast_filter[i][3],
vnet->unicast_filter[i][4],
vnet->unicast_filter[i][5]);
}
printf ("virtio_net: multicast filter %u entries\n",
multi_n_entries);
for (i = 0; i < multi_n_entries; i++) {
printf ("%u: %02X %02X %02X %02X %02X %02X\n", i,
vnet->multicast_filter[i][0],
vnet->multicast_filter[i][1],
vnet->multicast_filter[i][2],
vnet->multicast_filter[i][3],
vnet->multicast_filter[i][4],
vnet->multicast_filter[i][5]);
}
}
end:
return ack;
}
static u8
process_ctrl_cmd (struct virtio_net *vnet, u8 *cmd, unsigned int cmd_size)
{
u8 ack;
/* Sanity check */
if (cmd_size < 3) {
printf ("virtio_net: ignore possible invalid ctrl command\n");
ack = VIRTIO_NET_ACK_ERR;
goto end;
}
switch (cmd[0]) {
case VIRTIO_NET_CTRL_RX:
ack = process_ctrl_rx_cmd (vnet, cmd, cmd_size);
break;
case VIRTIO_NET_CTRL_MAC:
ack = process_ctrl_mac_cmd (vnet, cmd, cmd_size);
break;
default:
printf ("virtio_net: unsupport class %u\n", cmd[0]);
ack = VIRTIO_NET_ACK_ERR;
}
end:
return ack;
}
static void
do_net_ctrl (struct virtio_net *vnet, struct vr_desc *desc,
struct vr_avail *avail, struct vr_used *used)
{
u16 idx_a, idx_u, ring, queue_size;
u32 len, desc_len, copied;
u32 ring_tmp, d;
u8 *buf_ring, *cmd, ack;
bool intr = false, last;
queue_size = vnet->queue_size[2];
idx_a = avail->idx;
while (idx_a != used->idx) {
idx_u = used->idx % queue_size;
ring = avail->ring[idx_u];
ring_tmp = ((u32)ring << 16) | 1;
/* Ctrl command is variable in size, find the size first */
len = 0;
while (ring_tmp & 1) {
ring_tmp >>= 16;
d = ring_tmp % queue_size;
desc_len = desc[d].len;
len += desc_len;
ring_tmp = desc[d].flags_next;
}
if (len > PAGESIZE) {
printf ("virtio_net: ctrl command size is too large, "
"skip processing\n");
goto skip;
}
/* Merge command scatter-gather buffers into a single buffer */
copied = 0;
cmd = alloc (len);
ring_tmp = ((u32)ring << 16) | 1;
while (ring_tmp & 1) {
ring_tmp >>= 16;
d = ring_tmp % queue_size;
desc_len = desc[d].len;
ring_tmp = desc[d].flags_next;
last = !(ring_tmp & 0x1);
if ((copied + desc_len > len) ||
(last && copied + desc_len != len)) {
printf ("virtio_net: strange ctrl command "
"buffers, skip processing\n");
}
buf_ring = mapmem_as (vnet->as_dma, desc[d].addr,
desc_len, MAPMEM_WRITE);
memcpy (cmd + copied, buf_ring, desc_len);
copied += desc_len;
/* We reach the last buffer, process and set ack */
if (last) {
ack = process_ctrl_cmd (vnet, cmd, len);
buf_ring[desc_len - 1] = ack;
}
unmapmem (buf_ring, desc_len);
}
free (cmd);
skip:
used->ring[idx_u].id = ring;
used->ring[idx_u].len = len;
asm volatile ("" : : : "memory");
used->idx++;
intr = true;
}
if (avail->flags & VIRTQ_AVAIL_F_NO_INTERRUPT)
intr = false;
if (intr)
virtio_net_trigger_interrupt (vnet, 2);
}
static void
virtio_net_ctrl (struct virtio_net *vnet)
{
struct virtio_ring *p;
struct vr_desc *desc;
struct vr_avail *avail;
struct vr_used *used;
if (!vnet->ready)
return;
if (!(vnet->driver_feature & VIRTIO_NET_F_CTRL_VQ))
return;
if (vnet->v1) {
uint queue_size;
if (!vnet->queue_enable[2])
return;
queue_size = vnet->queue_size[2];
desc = mapmem_as (vnet->as_dma, vnet->desc[2],
sizeof *desc * queue_size, MAPMEM_WRITE);
avail = mapmem_as (vnet->as_dma, vnet->avail[2],
AVAIL_MAP_SIZE (queue_size), MAPMEM_WRITE);
used = mapmem_as (vnet->as_dma, vnet->used[2],
USED_MAP_SIZE (queue_size), MAPMEM_WRITE);
do_net_ctrl (vnet, desc, avail, used);
unmapmem (used, USED_MAP_SIZE (queue_size));
unmapmem (avail, AVAIL_MAP_SIZE (queue_size));
unmapmem (desc, sizeof *desc * queue_size);
} else {
p = mapmem_as (vnet->as_dma, (u64)vnet->queue[2] << 12,
sizeof *p, MAPMEM_WRITE);
do_net_ctrl (vnet, p->desc, &p->avail, &p->used);
unmapmem (p, sizeof *p);
}
}
static void
virtio_net_set_recv_callback (void *handle, net_recv_callback_t *callback,
void *param)
{
struct virtio_net *vnet = handle;
vnet->recv_func = callback;
vnet->recv_param = param;
}
static void
ccfg_device_feature_select (struct virtio_net *vnet, bool wr, union mem *data,
const void *extra_info)
{
if (wr)
vnet->device_feature_select = data->dword;
else
data->dword = vnet->device_feature_select;
}
static void
ccfg_device_feature (struct virtio_net *vnet, bool wr, union mem *data,
const void *extra_info)
{
u32 n = vnet->device_feature_select;
if (!wr && n < 2)
data->dword = vnet->device_feature >> (n * 32);
}
static void
ccfg_driver_feature_select (struct virtio_net *vnet, bool wr, union mem *data,
const void *extra_info)
{
if (wr)
vnet->driver_feature_select = data->dword;
else
data->dword = vnet->driver_feature_select;
}
static void
ccfg_driver_feature (struct virtio_net *vnet, bool wr, union mem *data,
const void *extra_info)
{
u32 n = vnet->driver_feature_select;
if (n < 2) {
if (wr)
((u32 *)&vnet->driver_feature)[n] = data->dword;
else
data->dword = vnet->driver_feature >> (n * 32);
}
}
static void
ccfg_msix_config (struct virtio_net *vnet, bool wr, union mem *data,
const void *extra_info)
{
if (wr) {
vnet->msix_cfgvec = data->word;
} else {
data->word = vnet->msix_cfgvec;
}
if (0 && wr)
printf ("cfgvec=%04X\n", data->word);
}
static void
ccfg_num_queues (struct virtio_net *vnet, bool wr, union mem *data,
const void *extra_info)
{
if (!wr)
data->word = VIRTIO_N_QUEUES;
}
static void
eval_status (struct virtio_net *vnet, bool v1, u8 new_status)
{
if (new_status == 0x0) {
printf ("virtio_net: reset\n");
virtio_net_disable_interrupt (vnet);
virtio_net_reset_dev (vnet);
return;
}
if (new_status & VIRTIO_STATUS_FEATURES_OK) {
if (v1 && (vnet->driver_feature & ~vnet->device_feature)) {
printf ("virtio_net: unsupport features found %llX\n",
vnet->driver_feature);
return;
}
if (vnet->driver_feature & VIRTIO_NET_F_CTRL_RX &&
!(vnet->driver_feature & VIRTIO_NET_F_CTRL_VQ)) {
printf ("virtio_net: VIRTIO_NET_F_CTRL_RX requires "
"VIRTIO_NET_F_CTRL_VQ\n");
return;
}
}
if (new_status & VIRTIO_STATUS_DRIVER_OK) {
vnet->v1 = v1;
vnet->v1_legacy = !(vnet->driver_feature & VIRTIO_F_VERSION_1);
if (v1 && vnet->v1_legacy) {
printf ("virtio_net: the guest driver does not accept "
"VIRTIO_F_VERSION_1\n");
printf ("virtio_net: assume that the driver uses "
"legacy header format\n");
}
if (v1 && !(vnet->driver_feature & VIRTIO_F_ACCESS_PLATFORM))
printf ("virtio_net: the guest driver does not accept "
"VIRTIO_F_ACCESS_PLATFORM\n");
vnet->ready = true;
if (!(vnet->cmd & 0x400) || vnet->msix_enabled)
virtio_net_enable_interrupt (vnet);
}
vnet->dev_status |= new_status;
}
static void
device_status (struct virtio_net *vnet, bool wr, bool v1, union mem *data)
{
if (wr)
eval_status (vnet, v1, data->byte);
else
data->byte = vnet->dev_status;
}
static void
legacy_device_status (struct virtio_net *vnet, bool wr, union mem *data,
const void *extra_info)
{
device_status (vnet, wr, false, data);
}
static void
ccfg_device_status (struct virtio_net *vnet, bool wr, union mem *data,
const void *extra_info)
{
device_status (vnet, wr, true, data);
}
static void
ccfg_config_generation (struct virtio_net *vnet, bool wr, union mem *data,
const void *extra_info)
{
if (!wr)
data->byte = 1;
}
static void
ccfg_queue_select (struct virtio_net *vnet, bool wr, union mem *data,
const void *extra_info)
{
if (wr) {
vnet->selected_queue = data->word;
} else {
data->word = vnet->selected_queue;
}
}
static void
ccfg_queue_size (struct virtio_net *vnet, bool wr, union mem *data,
const void *extra_info)
{
u16 n = vnet->selected_queue;
if (n >= VIRTIO_N_QUEUES)
return;
if (wr) {
vnet->queue_size[n] = data->word;
if (vnet->queue_size[n] != VIRTIO_NET_QUEUE_SIZE)
printf ("virtio_net: queue %u size is %u\n",
n, vnet->queue_size[n]);
} else {
data->word = vnet->queue_size[n];
}
}
static void
ccfg_queue_msix_vector (struct virtio_net *vnet, bool wr, union mem *data,
const void *extra_info)
{
u16 n = vnet->selected_queue;
if (n >= VIRTIO_N_QUEUES)
return;
spinlock_lock (&vnet->msix_lock);
if (wr) {
u16 v = data->word;
vnet->msix_quevec[n] = v;
vnet->msix_vector_change (vnet->msix_param, n, ~v ? v : -1);
} else {
data->word = vnet->msix_quevec[n];
}
spinlock_unlock (&vnet->msix_lock);
}
static void
ccfg_queue_enable (struct virtio_net *vnet, bool wr, union mem *data,
const void *extra_info)
{
u16 n = vnet->selected_queue;
if (n >= VIRTIO_N_QUEUES)
return;
if (wr)
vnet->queue_enable[n] = data->word;
else
data->word = vnet->queue_enable[n];
}
static void
ccfg_queue_notify_off (struct virtio_net *vnet, bool wr, union mem *data,
const void *extra_info)
{
if (!wr)
data->word = 0;
}
static void
ccfg_queue_legacy (struct virtio_net *vnet, bool wr, union mem *data,
const void *extra_info)
{
u16 n = vnet->selected_queue;
if (n < VIRTIO_N_QUEUES) {
if (wr)
vnet->queue[n] = data->dword;
else
data->dword = vnet->queue[n];
}
}
static void
do_queue_access (struct virtio_net *vnet, bool wr, union mem *data, u64 *queue)
{
if (wr)
*queue = data->qword;
else
data->qword = *queue;
}
static void
ccfg_queue_desc (struct virtio_net *vnet, bool wr, union mem *data,
const void *extra_info)
{
u16 n = vnet->selected_queue;
if (n < VIRTIO_N_QUEUES)
do_queue_access (vnet, wr, data, &vnet->desc[n]);
}
static void
ccfg_queue_driver (struct virtio_net *vnet, bool wr, union mem *data,
const void *extra_info)
{
u16 n = vnet->selected_queue;
if (n < VIRTIO_N_QUEUES)
do_queue_access (vnet, wr, data, &vnet->avail[n]);
}
static void
ccfg_queue_device (struct virtio_net *vnet, bool wr, union mem *data,
const void *extra_info)
{
u16 n = vnet->selected_queue;
if (n < VIRTIO_N_QUEUES)
do_queue_access (vnet, wr, data, &vnet->used[n]);
}
static void
queue_notify (struct virtio_net *vnet, bool wr, union mem *data,
const void *extra_info)
{
if (wr) {
switch (data->word) {
case 0:
virtio_net_suppress_interrupt (vnet, false);
break;
case 1:
virtio_net_recv (vnet);
break;
case 2:
virtio_net_ctrl (vnet);
break;
}
}
}
static void
isr_status (struct virtio_net *vnet, bool wr, union mem *data,
const void *extra_info)
{
if (!wr) {
if (virtio_net_untrigger_interrupt (vnet))
data->byte = 1;
else
data->byte = 0;
}
}
static void
dcfg_mac_addr (struct virtio_net *vnet, bool wr, union mem *data,
const void *extra_info)
{
if (!wr)
memcpy (data, vnet->macaddr, 6);
}
static enum dres_reg_ret_t
virtio_net_iohandler (const struct dres_reg *m, void *handle, phys_t offset,
bool wr, void *buf, uint len)
{
static const struct handle_io_data d_pin[] = {
{ 4, ccfg_device_feature },
{ 4, ccfg_driver_feature },
{ 4, ccfg_queue_legacy },
{ 2, ccfg_queue_size },
{ 2, ccfg_queue_select },
{ 2, queue_notify },
{ 1, legacy_device_status },
{ 1, isr_status },
{ 6, dcfg_mac_addr },
{ 0, NULL },
};
static const struct handle_io_data d_msix[] = {
{ 4, ccfg_device_feature },
{ 4, ccfg_driver_feature },
{ 4, ccfg_queue_legacy },
{ 2, ccfg_queue_size },
{ 2, ccfg_queue_select },
{ 2, queue_notify },
{ 1, legacy_device_status },
{ 1, isr_status },
{ 2, ccfg_msix_config },
{ 2, ccfg_queue_msix_vector },
{ 6, dcfg_mac_addr },
{ 0, NULL },
};
struct virtio_net *vnet = handle;
/* We have fast paths for queue_notify and isr_status */
if (wr && offset == 0x10 && len == 2)
queue_notify (vnet, wr, buf, NULL);
else if (!wr && offset == 0x13 && len == 1)
isr_status (vnet, wr, buf, NULL);
else
handle_io (vnet, wr, len, offset, buf,
vnet->msix_enabled ? d_msix : d_pin);
return DRES_REG_RET_DONE;
}
static void
handle_common_cfg (struct virtio_net *vnet, bool wr, u32 iosize, u32 offset,
union mem *data)
{
static const struct handle_io_data d[] = {
{ 4, ccfg_device_feature_select },
{ 4, ccfg_device_feature },
{ 4, ccfg_driver_feature_select },
{ 4, ccfg_driver_feature },
{ 2, ccfg_msix_config },
{ 2, ccfg_num_queues },
{ 1, ccfg_device_status },
{ 1, ccfg_config_generation },
{ 2, ccfg_queue_select },
{ 2, ccfg_queue_size },
{ 2, ccfg_queue_msix_vector },
{ 2, ccfg_queue_enable },
{ 2, ccfg_queue_notify_off },
{ 8, ccfg_queue_desc },
{ 8, ccfg_queue_driver },
{ 8, ccfg_queue_device },
{ 0, NULL },
};
handle_io (vnet, wr, iosize, offset, data, d);
}
static void
handle_notify_cfg (struct virtio_net *vnet, bool wr, u32 iosize, u32 offset,
union mem *data)
{
union mem tmp;
if (!wr)
memset (data, 0, iosize);
if (!offset) {
if (wr && iosize == 1) {
tmp.word = data->byte;
data = &tmp;
}
queue_notify (vnet, wr, data, NULL);
}
}
static void
handle_isr_cfg (struct virtio_net *vnet, bool wr, u32 iosize, u32 offset,
union mem *data)
{
if (!wr)
memset (data, 0, iosize);
if (!offset)
isr_status (vnet, wr, data, NULL);
}
static void
handle_dev_cfg (struct virtio_net *vnet, bool wr, u32 iosize, u32 offset,
union mem *data)
{
static const struct handle_io_data d[] = {
{ 6, dcfg_mac_addr },
{ 0, NULL },
};
handle_io (vnet, wr, iosize, offset, data, d);
}
static void
handle_msix (struct virtio_net *vnet, bool wr, u32 iosize, u32 offset,
union mem *data)
{
if (!wr)
memset (data, 0, iosize);
if (!vnet->msix)
return;
spinlock_lock (&vnet->msix_lock);
if (offset < sizeof vnet->msix_table_entry) {
void *p = vnet->msix_table_entry;
u32 end = offset + iosize;
if (end > sizeof vnet->msix_table_entry)
end = sizeof vnet->msix_table_entry;
if (wr)
memcpy (p + offset, data, end - offset);
else
memcpy (data, p + offset, end - offset);
if (0 && wr)
printf ("MSI-X[0x%04X] = 0x%08X\n", offset,
data->dword & ((2u << (iosize * 8 - 1)) - 1));
} else if (offset <= VIRTIO_NET_MSIX_TAB_LEN &&
offset + iosize > VIRTIO_NET_MSIX_TAB_LEN) {
/* Pending bits: not yet implemented */
}
if (wr)
vnet->msix_mmio_update (vnet->msix_param);
spinlock_unlock (&vnet->msix_lock);
}
static void
do_handle_mmio (struct virtio_net *vnet, phys_t offset, bool wr, void *data,
uint iosize)
{
static const v1_handler_t m[] = {
handle_common_cfg,
handle_notify_cfg,
handle_isr_cfg,
handle_dev_cfg,
handle_msix,
};
void *d = data;
u32 i = offset / VIRTIO_CFG_SIZE;
u32 mmio_offset = offset % VIRTIO_CFG_SIZE;
u32 accessible_size;
while (i < sizeof m / sizeof m[0]) {
m[i] (vnet, wr, iosize, mmio_offset, d);
accessible_size = VIRTIO_CFG_SIZE - mmio_offset;
if (iosize <= accessible_size)
return;
i++;
d += accessible_size;
iosize -= accessible_size;
mmio_offset = 0;
}
if (!wr && iosize)
memset (d, iosize, 0);
}
static enum dres_reg_ret_t
virtio_net_mmio (const struct dres_reg *m, void *handle, phys_t offset,
bool wr, void *buf, uint iosize)
{
struct virtio_net *vnet = handle;
do_handle_mmio (vnet, offset, wr, buf, iosize);
return DRES_REG_RET_DONE;
}
static void
pcie_config_access (struct virtio_net *vnet, bool wr, u32 iosize, u32 offset,
void *data)
{
struct pci_device *dev = vnet->dev;
offset += PCI_CONFIG_REGS8_NUM;
if (dev) {
if (wr)
pci_handle_default_config_write (dev, iosize,
offset, data);
else
pci_handle_default_config_read (dev, iosize,
offset, data);
} else {
if (!wr)
memset (data, 0, iosize);
}
}
static void
pci_handle_default (struct virtio_net *vnet, bool wr, union mem *data,
const void *extra_info)
{
const struct virtio_pci_regs32 *r = extra_info;
struct pci_device *dev = vnet->dev;
union mem v;
u32 offset, mask;
if (!wr) {
data->dword = r->initial_val;
mask = r->mask;
if (dev && mask) {
offset = (r - vnet_pci_initial_val) * sizeof v.dword;
pci_handle_default_config_read (dev, sizeof v.dword,
offset, &v);
v.dword &= mask;
data->dword &= ~mask;
data->dword |= v.dword;
}
}
}
static void
pci_handle_cmd (struct virtio_net *vnet, bool wr, union mem *data,
const void *extra_info)
{
if (wr) {
vnet->cmd = data->dword;
if (!vnet->msix_enabled && (vnet->cmd & 0x400))
virtio_net_disable_interrupt (vnet);
} else {
pci_handle_default (vnet, wr, data, extra_info);
data->dword |= (vnet->cmd & 0x407) |
0x100000; /* Capabilities */
}
}
static void
pci_handle_multifunction (struct virtio_net *vnet, bool wr, union mem *data,
const void *extra_info)
{
if (!wr) {
pci_handle_default (vnet, wr, data, extra_info);
data->dword |= vnet->multifunction << 23;
}
}
static void
pci_handle_ioport_write (struct virtio_net *vnet, phys_t new_ioport, uint size)
{
enum dres_err_t err;
if (vnet->prev_port) {
dres_reg_unregister_handler (vnet->r_io);
dres_reg_free (vnet->r_io);
vnet->r_io = NULL;
}
printf ("virtio_net hook 0x%04llX\n", new_ioport);
vnet->r_io = dres_reg_alloc (new_ioport, size, DRES_REG_TYPE_IO,
pci_dres_reg_translate, vnet->dev, 0);
err = dres_reg_register_handler (vnet->r_io, virtio_net_iohandler,
vnet);
if (err != DRES_ERR_NONE)
panic ("%s(): fail to register IO handler", __func__);
vnet->prev_port = vnet->port;
}
static void
pci_handle_ioport (struct virtio_net *vnet, bool wr, union mem *data,
const void *extra_info)
{
if (wr) {
vnet->port = data->dword;
if ((vnet->port | 0x1F) != 0x1F &&
(vnet->port | 0x1F) < 0xFFFF) {
if (vnet->prev_port != vnet->port) {
pci_handle_ioport_write (vnet,
vnet->port & ~0x1F,
0x20);
vnet->prev_port = vnet->port;
}
}
} else {
data->dword = (vnet->port & ~0x1F) |
PCI_CONFIG_BASE_ADDRESS_IOSPACE;
}
}
static void
pci_handle_mmio (struct virtio_net *vnet, bool wr, union mem *data,
const void *extra_info)
{
struct dres_reg *old_m;
enum dres_err_t err;
if (wr) {
u32 new_base = data->dword & ~(vnet->mmio_len - 1);
if (new_base == ~(vnet->mmio_len - 1)) {
vnet->mmio_base_emul_1 = true;
vnet->mmio_base_emul = true;
} else if (!new_base) {
vnet->mmio_base_emul_1 = false;
vnet->mmio_base_emul = true;
} else {
vnet->mmio_base_emul = false;
if (vnet->mmio_base == new_base)
return;
vnet->mmio_base = new_base;
old_m = vnet->r_mm;
if (old_m)
dres_reg_unregister_handler (old_m);
vnet->r_mm = dres_reg_alloc (vnet->mmio_base,
vnet->mmio_len,
DRES_REG_TYPE_MM,
pci_dres_reg_translate,
vnet->dev, 0);
if (vnet->mmio_change) {
struct pci_bar_info bar;
bar.type = PCI_BAR_INFO_TYPE_MEM;
bar.base = new_base;
bar.len = vnet->mmio_len;
vnet->mmio_change (vnet->mmio_param, &bar,
vnet->r_mm);
}
err = dres_reg_register_handler (vnet->r_mm,
virtio_net_mmio,
vnet);
if (err != DRES_ERR_NONE)
panic ("%s(): fail to register MMIO handler",
__func__);
if (old_m)
dres_reg_free (old_m);
}
} else {
data->dword = vnet->mmio_base_emul ?
vnet->mmio_base_emul_1 ? ~(vnet->mmio_len - 1) : 0 :
vnet->mmio_base;
}
}
static void
pci_handle_next (struct virtio_net *vnet, bool wr, union mem *data,
const void *extra_info)
{
if (!wr)
data->dword = vnet->msix ?
VIRTIO_MSIX_CAP_OFFSET :
VIRTIO_COMMON_CFG_CAP_OFFSET;
}
static void
pci_handle_msix (struct virtio_net *vnet, bool wr, union mem *data,
const void *extra_info)
{
if (!wr)
data->dword = 0;
if (!vnet->msix)
return;
if (wr) {
bool prev_msix_enabled = vnet->msix_enabled;
u8 d = (&data->byte)[3];
vnet->msix_enabled = !!(d & 0x80);
vnet->msix_mask = !!(d & 0x40);
if (1)
printf ("MSI-X Config [0x%04X] 0x%02X /%d,%d\n",
VIRTIO_MSIX_CAP_OFFSET + 3, d,
vnet->msix_enabled, vnet->msix_mask);
if (!vnet->msix_enabled && (vnet->cmd & 0x400))
virtio_net_disable_interrupt (vnet);
if (prev_msix_enabled != vnet->msix_enabled) {
if (prev_msix_enabled)
vnet->msix_disable (vnet->msix_param);
else
vnet->msix_enable (vnet->msix_param);
}
} else {
data->dword = 0x11 | VIRTIO_COMMON_CFG_CAP_OFFSET << 8 |
(2 | (vnet->msix_enabled ? 0x8000 : 0) |
(vnet->msix_mask ? 0x4000 : 0)) << 16;
}
}
static void
pci_handle_pci_cfg_next (struct virtio_net *vnet, bool wr, union mem *data,
const void *extra_info)
{
if (!wr)
data->dword = VIRTIO_CAP_1ST_DWORD (vnet->next_ext_cap_offset,
4, VIRTIO_PCI_CAP_PCI_CFG);
}
static void
pci_handle_pci_cfg_bar (struct virtio_net *vnet, bool wr, union mem *data,
const void *extra_info)
{
if (wr)
vnet->pci_cfg.cap.bar = data->byte;
else
data->dword = vnet->pci_cfg.cap.bar;
}
static void
pci_handle_pci_cfg_offset (struct virtio_net *vnet, bool wr, union mem *data,
const void *extra_info)
{
if (wr)
vnet->pci_cfg.cap.offset = data->dword;
else
data->dword = vnet->pci_cfg.cap.offset;
}
static void
pci_handle_pci_cfg_length (struct virtio_net *vnet, bool wr, union mem *data,
const void *extra_info)
{
if (wr)
vnet->pci_cfg.cap.length = data->dword;
else
data->dword = vnet->pci_cfg.cap.length;
}
static void
pci_handle_pci_cfg_data (struct virtio_net *vnet, bool wr, union mem *data,
const void *extra_info)
{
u32 length = vnet->pci_cfg.cap.length;
if (vnet->pci_cfg.cap.bar == VIRTIO_MMIO_BAR && length - 1 < 4) {
phys_t offset = vnet->pci_cfg.cap.offset;
do_handle_mmio (vnet, offset, wr, data, length);
}
}
static void
pci_handle_ext_cap (struct virtio_net *vnet, bool wr, union mem *data,
const void *extra_info)
{
const struct virtio_ext_cap *ext_cap = extra_info;
struct pci_device *dev = vnet->dev;
if (!wr)
data->dword = 0;
if (!dev || !ext_cap->offset)
return;
if (wr) {
pci_handle_default_config_write (dev, sizeof data->dword,
ext_cap->offset, data);
} else {
pci_handle_default_config_read (dev, sizeof data->dword,
ext_cap->offset, data);
if (ext_cap->replace_next) {
data->dword &= 0xFFFF00FF;
data->dword |= (ext_cap->new_next << 8);
}
}
}
static void
do_handle_config (struct virtio_net *vnet, u8 iosize, u16 offset, bool wr,
union mem *data)
{
/* Readjust offset for optimization */
u32 i = OFFSET_TO_DWORD_BLOCK (offset);
if (i > PCI_CONFIG_REGS32_NUM)
i = PCI_CONFIG_REGS32_NUM;
offset = offset - i * sizeof (u32);
handle_io_with_default (vnet, wr, iosize, offset, data,
vnet_pci_data + i, pcie_config_access);
}
void
virtio_net_handle_config_read (void *handle, u8 iosize, u16 offset,
union mem *data)
{
struct virtio_net *vnet = handle;
do_handle_config (vnet, iosize, offset, false, data);
}
void
virtio_net_handle_config_write (void *handle, u8 iosize, u16 offset,
union mem *data)
{
struct virtio_net *vnet = handle;
do_handle_config (vnet, iosize, offset, true, data);
}
void
virtio_net_set_multifunction (void *handle, int enable)
{
struct virtio_net *vnet = handle;
vnet->multifunction = enable;
}
struct msix_table *
virtio_net_set_msix (void *handle,
void (*msix_disable) (void *msix_param),
void (*msix_enable) (void *msix_param),
void (*msix_vector_change) (void *msix_param,
unsigned int queue,
int vector),
void (*msix_generate) (void *msix_param,
unsigned int queue),
void (*msix_mmio_update) (void *msix_param),
void *msix_param)
{
struct virtio_net *vnet = handle;
vnet->msix_enable = msix_enable;
vnet->msix_disable = msix_disable;
vnet->msix_vector_change = msix_vector_change;
vnet->msix_generate = msix_generate;
vnet->msix_mmio_update = msix_mmio_update;
vnet->msix_param = msix_param;
vnet->msix = true;
return vnet->msix_table_entry;
}
void
virtio_net_set_pci_device (void *handle, struct pci_device *dev,
struct pci_bar_info *initial_bar_info,
struct dres_reg *initial_r,
void (*mmio_change) (void *mmio_param,
struct pci_bar_info *bar_info,
struct dres_reg *new_r),
void *mmio_param)
{
struct virtio_net *vnet = handle;
enum dres_err_t err;
vnet->dev = dev;
if (initial_bar_info) {
vnet->mmio_base = initial_bar_info->base;
vnet->mmio_len = initial_bar_info->len;
if (~(vnet->mmio_len - 1) != vnet->mmio_base) {
vnet->r_mm = initial_r;
err = dres_reg_register_handler (initial_r,
virtio_net_mmio,
vnet);
if (err != DRES_ERR_NONE)
panic ("%s(): fail to register MMIO handler",
__func__);
}
}
vnet->mmio_change = mmio_change;
vnet->mmio_param = mmio_param;
}
static void
initialize_vnet_pci_data (struct virtio_net *vnet)
{
static bool init_done;
uint i;
uint ext_start;
if (init_done)
return;
ext_start = VIRTIO_EXT_CAP_DWORD_BLOCK;
for (i = 0; i < ext_start; i++) {
vnet_pci_data[i].size = 4;
vnet_pci_data[i].handler = pci_handle_default;
vnet_pci_data[i].extra_info = &vnet_pci_initial_val[i];
}
for (i = ext_start; i < PCI_CONFIG_REGS32_NUM; i++) {
vnet_pci_data[i].size = 4;
vnet_pci_data[i].handler = pci_handle_ext_cap;
vnet_pci_data[i].extra_info = &vnet->ext_caps[i - ext_start];
}
vnet_pci_data[1].handler = pci_handle_cmd;
vnet_pci_data[3].handler = pci_handle_multifunction;
vnet_pci_data[4].handler = pci_handle_ioport;
vnet_pci_data[6].handler = pci_handle_mmio;
vnet_pci_data[13].handler = pci_handle_next;
i = OFFSET_TO_DWORD_BLOCK (VIRTIO_MSIX_CAP_OFFSET);
vnet_pci_data[i].handler = pci_handle_msix;
i = OFFSET_TO_DWORD_BLOCK (VIRTIO_PCI_CFG_CAP_OFFSET);
vnet_pci_data[i].handler = pci_handle_pci_cfg_next;
vnet_pci_data[i + 1].handler = pci_handle_pci_cfg_bar;
vnet_pci_data[i + 2].handler = pci_handle_pci_cfg_offset;
vnet_pci_data[i + 3].handler = pci_handle_pci_cfg_length;
vnet_pci_data[i + 4].handler = pci_handle_pci_cfg_data;
init_done = true;
}
void *
virtio_net_init (struct nicfunc **func, u8 *macaddr,
const struct mm_as *as_dma,
void (*intr_clear) (void *intr_param),
void (*intr_set) (void *intr_param),
void (*intr_disable) (void *intr_param),
void (*intr_enable) (void *intr_param),
void *intr_param)
{
static struct nicfunc virtio_net_func = {
.get_nic_info = virtio_net_get_nic_info,
.send = virtio_net_send,
.set_recv_callback = virtio_net_set_recv_callback,
};
struct virtio_net *vnet;
uint i;
vnet = alloc (sizeof *vnet);
vnet->prev_port = 0;
vnet->port = 0x5000;
vnet->cmd = 0x7; /* Interrupts should not be masked here
because apparently OS X does not
unmask interrupts. */
vnet->mmio_base = 0xFFFFF000;
vnet->mmio_len = 0x1000;
vnet->device_feature = VIRTIO_NET_DEVICE_FEATURES;
vnet->r_mm = NULL;
vnet->r_io = NULL;
vnet->mmio_change = NULL;
vnet->mmio_base_emul = false;
vnet->macaddr = macaddr;
vnet->dev = NULL;
/*
* For legacy virtio_net drivers, we should use physical addresses.
* However, macOS seems to always use virtual addresses even though
* VIRTIO_F_ACCESS_PLATFORM is not negotiated. Setting vnet->as_dma
* like this is a workaround for macOS (Assuming that modern
* virtio_net drivers supports v1.1 implementation).
*/
vnet->as_dma = as_dma;
vnet->intr_clear = intr_clear;
vnet->intr_set = intr_set;
vnet->intr_disable = intr_disable;
vnet->intr_enable = intr_enable;
vnet->intr_param = intr_param;
vnet->last_time = 0;
vnet->multifunction = 0;
vnet->intr_suppress = false;
vnet->intr_suppress_running = 0;
spinlock_init (&vnet->intr_suppress_lock);
vnet->intr_enabled = false;
vnet->intr = false;
vnet->msix = false;
vnet->msix_cfgvec = 0xFFFF;
vnet->msix_enabled = false;
vnet->msix_mask = false;
memset (&vnet->ext_caps, 0, sizeof vnet->ext_caps);
vnet->next_ext_cap = VIRTIO_EXT_CAP_OFFSET;
vnet->next_ext_cap_offset = 0;
vnet->pcie_cap = false;
memset (&vnet->pci_cfg, 0, sizeof vnet->pci_cfg);
memset (&vnet->msix_table_entry, 0, sizeof vnet->msix_table_entry);
spinlock_init (&vnet->msix_lock);
for (i = 0; i < VIRTIO_N_QUEUES; i++) {
vnet->msix_quevec[i] = 0xFFFF;
vnet->msix_table_entry[i].mask = 1;
}
virtio_net_reset_dev (vnet);
*func = &virtio_net_func;
initialize_vnet_pci_data (vnet);
return vnet;
}
bool
virtio_net_add_cap (void *handle, u8 cap_start, u8 size)
{
struct virtio_net *vnet = handle;
uint n_blocks = (size + sizeof (u32) - 1) / sizeof (u32);
uint aligned_size = n_blocks * sizeof (u32);
u32 start, i;
if (vnet->next_ext_cap + aligned_size > PCI_CONFIG_REGS8_NUM ||
cap_start < 0x40)
return false;
start = (vnet->next_ext_cap - VIRTIO_EXT_CAP_OFFSET) / sizeof (u32);
vnet->ext_caps[start].replace_next = true;
vnet->ext_caps[start].new_next = vnet->next_ext_cap_offset;
for (i = 0; i < n_blocks; i++)
vnet->ext_caps[start + i].offset = cap_start +
i * sizeof (u32);
vnet->next_ext_cap_offset = vnet->next_ext_cap;
vnet->next_ext_cap += aligned_size;
return true;
}
void
virtio_net_unregister_handler (void *handle)
{
struct virtio_net *vnet = handle;
if (vnet->prev_port) {
vnet->prev_port = 0;
dres_reg_unregister_handler (vnet->r_io);
dres_reg_free (vnet->r_io);
vnet->r_io = NULL;
}
if (vnet->r_mm) {
dres_reg_unregister_handler (vnet->r_mm);
vnet->r_mm = NULL;
}
}
struct dres_reg *
virtio_net_suspend (void *handle)
{
struct virtio_net *vnet = handle;
struct dres_reg *r_to_free;
r_to_free = vnet->r_mm;
if (r_to_free) {
dres_reg_unregister_handler (r_to_free);
vnet->r_mm = NULL;
}
return r_to_free;
}
void
virtio_net_resume (void *handle, struct dres_reg *initial_r)
{
struct virtio_net *vnet = handle;
enum dres_err_t err;
vnet->r_mm = initial_r;
err = dres_reg_register_handler (initial_r, virtio_net_mmio, vnet);
if (err != DRES_ERR_NONE)
panic ("%s(): fail to register MMIO handler", __func__);
}
</document_content>
</document>
<document index="2">
<source>./drivers/net/virtio_net.h</source>
<document_content>
/*
* Copyright (c) 2007, 2008 University of Tsukuba
* Copyright (c) 2015 Igel Co., Ltd
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* 1. Redistributions of source code must retain the above copyright notice,
* this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright notice,
* this list of conditions and the following disclaimer in the documentation
* and/or other materials provided with the distribution.
* 3. Neither the name of the University of Tsukuba nor the names of its
* contributors may be used to endorse or promote products derived from
* this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
* LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGE.
*/
#include <core/types.h>
struct mm_as;
struct nicfunc;
struct pci_bar_info;
struct pci_device;
struct dres_reg;
#ifdef VIRTIO_NET
void virtio_net_handle_config_read (void *handle, u8 iosize, u16 offset,
union mem *data);
void virtio_net_handle_config_write (void *handle, u8 iosize, u16 offset,
union mem *data);
void virtio_net_set_multifunction (void *handle, int enable);
struct msix_table *
virtio_net_set_msix (void *handle,
void (*msix_disable) (void *msix_param),
void (*msix_enable) (void *msix_param),
void (*msix_vector_change) (void *msix_param,
unsigned int queue,
int vector),
void (*msix_generate) (void *msix_param,
unsigned int queue),
void (*msix_mmio_update) (void *msix_param),
void *msix_param);
void virtio_net_set_pci_device (void *handle, struct pci_device *dev,
struct pci_bar_info *initial_bar_info,
struct dres_reg *initial_r,
void (*mmio_change) (void *mmio_param,
struct pci_bar_info
*bar_info,
struct dres_reg *new_r),
void *mmio_param);
void *virtio_net_init (struct nicfunc **func, u8 *macaddr,
const struct mm_as *as_dma,
void (*intr_clear) (void *intr_param),
void (*intr_set) (void *intr_param),
void (*intr_disable) (void *intr_param),
void (*intr_enable) (void *intr_param),
void *intr_param);
bool virtio_net_add_cap (void *handle, u8 cap_start, u8 size);
void virtio_net_unregister_handler (void *handle);
struct dres_reg *virtio_net_suspend (void *handle);
void virtio_net_resume (void *handle, struct dres_reg *initial_r);
#else
static inline void
virtio_net_handle_config_read (void *handle, u8 iosize, u16 offset,
union mem *data)
{
}
static inline void
virtio_net_handle_config_write (void *handle, u8 iosize, u16 offset,
union mem *data)
{
}
static inline void
virtio_net_set_multifunction (void *handle, int enable)
{
}
static inline struct msix_table *
virtio_net_set_msix (void *handle,
void (*msix_disable) (void *msix_param),
void (*msix_enable) (void *msix_param),
void (*msix_vector_change) (void *msix_param,
unsigned int queue,
int vector),
void (*msix_generate) (void *msix_param,
unsigned int queue),
void (*msix_mmio_update) (void *msix_param),
void *msix_param)
{
return NULL;
}
static inline void
virtio_net_set_pci_device (void *handle, struct pci_device *dev,
struct pci_bar_info *initial_bar_info,
struct dres_reg *initial_r,
void (*mmio_change) (void *mmio_param,
struct pci_bar_info *bar_info,
struct dres_reg *new_r),
void *mmio_param)
{
}
static inline void *
virtio_net_init (struct nicfunc **func, u8 *macaddr,
const struct mm_as *as_dma,
void (*intr_clear) (void *intr_param),
void (*intr_set) (void *intr_param),
void (*intr_disable) (void *intr_param),
void (*intr_enable) (void *intr_param), void *intr_param)
{
return NULL;
}
static inline bool
virtio_net_add_cap (void *handle, u8 cap_start, u8 size)
{
return false;
}
static inline void
virtio_net_unregister_handler (void *handle)
{
}
static inline struct dres_reg *
virtio_net_suspend (void *handle)
{
return NULL;
}
static inline void
virtio_net_resume (void *handle, struct dres_reg *initial_r)
{
}
#endif
</document_content>
</document>
<document index="3">
<source>./drivers/net/pro100.c</source>
<document_content>
/*
* Copyright (c) 2007, 2008 University of Tsukuba
* Copyright (C) 2007, 2008
* National Institute of Information and Communications Technology
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* 1. Redistributions of source code must retain the above copyright notice,
* this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright notice,
* this list of conditions and the following disclaimer in the documentation
* and/or other materials provided with the distribution.
* 3. Neither the name of the University of Tsukuba nor the names of its
* contributors may be used to endorse or promote products derived from
* this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
* LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGE.
*/
#include <core.h>
#include <core/dres.h>
#include <core/mm.h>
#include <core/time.h>
#include <net/netapi.h>
#include <pci.h>
#include <pci_vtd_trans.h>
#include <Se/Se.h> /* Se header needs to be here */
#if defined (__i386__) || defined (__x86_64__)
#include "../../core/x86/beep.h" /* DEBUG */
#endif
#include "pro100.h"
#define SeCopy memcpy
#define SeZero(addr, len) memset (addr, 0, len)
#define SeZeroMalloc(len) memset (alloc (len), 0, len)
#define SeMalloc alloc
#define SeFree free
//#define PRO100_PASS_MODE
//#define debugprint(fmt, args...) printf(fmt, ## args); pro100_sleep(70);
#define debugprint(fmt, args...) do if (0) printf (fmt, ## args); while (0)
static PRO100_CTX *pro100_ctx = NULL;
static const char driver_name[] = "pro100";
static const char driver_longname[] = "VPN for Intel PRO/100";
static void
GetPhysicalNicInfo (void *handle, struct nicinfo *info)
{
info->mtu = 1500;
info->media_speed = 1000000000;
SeCopy (info->mac_address, pro100_get_ctx()->mac_address, 6);
}
static void
SendPhysicalNic (void *handle, unsigned int num_packets, void **packets,
unsigned int *packet_sizes, bool print_ok)
{
if (true)
{
UINT i;
for (i = 0;i < num_packets;i++)
{
void *data = packets[i];
UINT size = packet_sizes[i];
pro100_send_packet_to_line (handle, data, size);
}
}
}
static void
SetPhysicalNicRecvCallback (void *handle, net_recv_callback_t *callback,
void *param)
{
if (true)
{
PRO100_CTX *ctx = handle;
ctx->CallbackRecvPhyNic = callback;
ctx->CallbackRecvPhyNicParam = param;
}
}
static void
GetVirtualNicInfo (void *handle, struct nicinfo *info)
{
info->mtu = 1500;
info->media_speed = 1000000000;
SeCopy(info->mac_address, pro100_get_ctx()->mac_address, 6);
}
static void
SendVirtualNic (void *handle, unsigned int num_packets, void **packets,
unsigned int *packet_sizes, bool print_ok)
{
if (true)
{
UINT i;
for (i = 0;i < num_packets;i++)
{
void *data = packets[i];
UINT size = packet_sizes[i];
pro100_write_recv_packet (handle, data, size);
}
}
}
static void
SetVirtualNicRecvCallback (void *handle, net_recv_callback_t *callback,
void *param)
{
if (true)
{
PRO100_CTX *ctx = handle;
ctx->CallbackRecvVirtNic = callback;
ctx->CallbackRecvVirtNicParam = param;
}
}
static struct nicfunc phys_func = {
.get_nic_info = GetPhysicalNicInfo,
.send = SendPhysicalNic,
.set_recv_callback = SetPhysicalNicRecvCallback,
}, virt_func = {
.get_nic_info = GetVirtualNicInfo,
.send = SendVirtualNic,
.set_recv_callback = SetVirtualNicRecvCallback,
};
PRO100_CTX *pro100_get_ctx()
{
if (pro100_ctx == NULL)
{
debugprint("Error: No PRO/100 Devices!\n");
pro100_beep(1234, 10000);
while (true);
}
return pro100_ctx;
}
static void
mmio_gphys_access (phys_t gphysaddr, bool wr, void *buf, uint len, u32 flags)
{
void *p;
if (!len)
return;
p = mapmem_as (as_passvm, gphysaddr, len,
(wr ? MAPMEM_WRITE : 0) | flags);
ASSERT (p);
if (wr)
memcpy (p, buf, len);
else
memcpy (buf, p, len);
unmapmem (p, len);
}
// Initialize the VPN Client
void pro100_init_vpn_client(PRO100_CTX *ctx)
{
// Argument check
if (ctx == NULL)
{
return;
}
if (ctx->vpn_inited)
{
// Already initialized
return;
}
// Initialize the VPN Client
//ctx->vpn_handle = VPN_IPsec_Client_Start((SE_HANDLE)ctx, (SE_HANDLE)ctx, "config.txt");
net_init (ctx->net_handle, ctx, &phys_func, ctx, &virt_func);
net_start (ctx->net_handle);
ctx->vpn_inited = true;
}
// Play a beep
void pro100_beep(UINT freq, UINT msecs)
{
#if defined (__i386__) || defined (__x86_64__)
beep_on();
beep_set_freq(freq);
pro100_sleep(msecs);
beep_off();
#endif
}
// Sleep
void pro100_sleep(UINT msecs)
{
UINT64 tick;
if (msecs == 0)
{
return;
}
tick = get_time () + msecs * 1000;
while (get_time () <= tick);
}
// Allocate a page
void *pro100_alloc_page(phys_t *ptr)
{
void *vptr;
void *vptr2;
phys_t pptr;
alloc_page(&vptr, &pptr);
vptr2 = mapmem_hphys(pptr, PAGESIZE, MAPMEM_WRITE | MAPMEM_UC);
*ptr = pptr;
return vptr2;
}
// Free a page
void pro100_free_page(void *v, phys_t ptr)
{
unmapmem(v, PAGESIZE);
free_page_phys(ptr);
}
// Initialize the CU base address
void pro100_init_cu_base_addr(PRO100_CTX *ctx)
{
// Argument check
if (ctx == NULL)
{
return;
}
if (ctx->cu_base_inited)
{
return;
}
ctx->cu_base_inited = true;
pro100_wait_cu_ru_accepable(ctx);
pro100_write(ctx, PRO100_CSR_OFFSET_SCB_GENERAL_POINTER, 0, 4);
pro100_write(ctx, PRO100_CSR_OFFSET_SCB_COMMAND_WORD_0,
PRO100_MAKE_CU_RU_COMMAND(PRO100_CU_CMD_LOAD_CU_BASE, PRO100_RU_CMD_NOOP),
1);
pro100_wait_cu_ru_accepable(ctx);
}
// Initialize the RU base address
void pro100_init_ru_base_addr(PRO100_CTX *ctx)
{
// Argument check
if (ctx == NULL)
{
return;
}
if (ctx->ru_base_inited)
{
return;
}
ctx->ru_base_inited = true;
pro100_wait_cu_ru_accepable(ctx);
pro100_write(ctx, PRO100_CSR_OFFSET_SCB_GENERAL_POINTER, 0, 4);
pro100_write(ctx, PRO100_CSR_OFFSET_SCB_COMMAND_WORD_0,
PRO100_MAKE_CU_RU_COMMAND(PRO100_CU_CMD_NOOP, PRO100_RU_CMD_LOAD_RU_BASE),
1);
pro100_wait_cu_ru_accepable(ctx);
}
// Hand a packet to the guest OS as if it had been received
void pro100_write_recv_packet(PRO100_CTX *ctx, void *buf, UINT size)
{
PRO100_RFD rfd;
phys_t ptr;
// Argument check
if (ctx == NULL || buf == NULL || size == 0)
{
return;
}
if (ctx->guest_rfd_current == 0)
{
// The guest OS has not specified an RFD address
return;
}
/*
if (ctx->guest_ru_suspended)
{
// The RU has been suspended by the guest OS
return;
}*/
// Check the contents of the current RFD buffer
ptr = ctx->guest_rfd_current;
pro100_mem_read(&rfd, ptr, sizeof(PRO100_RFD));
if (rfd.eof)
{
// The EOF bit is set, so this RFD cannot receive any data
return;
}
// Write the data into the RFD
rfd.status = 0;
rfd.sf = rfd.h = 0;
SeCopy(rfd.data, buf, size);
rfd.recved_bytes = size;
rfd.f = 1;
rfd.ok = 1;
rfd.c = 1;
rfd.eof = 1;
// Bit check
if (rfd.el)
{
// This is the last RFD
ctx->guest_rfd_current = ctx->guest_rfd_first = 0;
}
else
{
// Get the link address of the next RFD
ctx->guest_rfd_current = rfd.link_address;
if (ctx->guest_rfd_current == 0xffffffff)
{
ctx->guest_rfd_current = 0;
}
if (rfd.s)
{
ctx->guest_ru_suspended = true;
}
}
pro100_mem_write(ptr, &rfd, sizeof(PRO100_RFD));
// Generate an interrupt
pro100_generate_int(ctx);
}
// Poll the RU
void pro100_poll_ru(PRO100_CTX *ctx)
{
bool b = false;
// Argument check
if (ctx == NULL)
{
return;
}
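// Start the RU if necessary, then drain completed RFDs from the
// host-owned receive ring and hand each packet to the physical NIC
// receive callback.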
LABEL_LOOP:
pro100_init_ru_base_addr(ctx);
if (ctx->host_ru_started == false)
{
// Start the RU
ctx->host_ru_started = true;
pro100_wait_cu_ru_accepable(ctx);
pro100_write(ctx, PRO100_CSR_OFFSET_SCB_GENERAL_POINTER, (UINT)ctx->first_recv->rfd_ptr, 4);
pro100_write(ctx, PRO100_CSR_OFFSET_SCB_COMMAND_WORD_0,
PRO100_MAKE_CU_RU_COMMAND(PRO100_CU_CMD_NOOP, PRO100_RU_CMD_START),
1);
pro100_flush(ctx);
pro100_wait_cu_ru_accepable(ctx);
}
// Check whether a new packet has arrived
while (true)
{
if (ctx->current_recv->rfd->c)
{
UCHAR *data;
UINT size;
// A packet has arrived
data = &ctx->current_recv->rfd->data[0];
size = ctx->current_recv->rfd->recved_bytes;
if (size >= 1 && size <= PRO100_MAX_PACKET_SIZE)
{
// Packet reception complete
#ifdef PRO100_PASS_MODE
pro100_write_recv_packet(ctx, data, size);
#else // PRO100_PASS_MODE
if (ctx->CallbackRecvPhyNic != NULL)
{
void *packet_data[1];
UINT packet_size[1];
packet_data[0] = data;
packet_size[0] = size;
ctx->CallbackRecvPhyNic(ctx, 1, packet_data, packet_size, ctx->CallbackRecvPhyNicParam, NULL);
}
#endif // PRO100_PASS_MODE
}
ctx->current_recv = ctx->current_recv->next_recv;
if (ctx->current_recv == NULL)
{
// Reception has completed up to the last receive buffer, so reset the buffers
pro100_init_recv_buffer(ctx);
ctx->host_ru_started = false;
b = true;
break;
}
}
else
{
break;
}
}
if (b)
{
b = false;
goto LABEL_LOOP;
}
}
// Have the CU execute an operation
void pro100_exec_cu_op(PRO100_CTX *ctx, PRO100_OP_BLOCK_MAX *op, UINT size)
{
volatile PRO100_OP_BLOCK *b;
PRO100_OP_BLOCK *b2;
phys_t ptr;
bool src_el, src_s, src_i;
UINT src_link_addr;
bool timeouted;
UINT64 start_tick;
// Argument check
if (ctx == NULL || op == NULL)
{
return;
}
// Allocate memory
b = pro100_alloc_page(&ptr);
// Copy into a temporary area
SeCopy((void *)b, op, size);
// Back up the original fields
src_el = b->el;
src_s = b->s;
src_i = b->i;
src_link_addr = b->link_address;
// Set the flags
b->el = true;
b->s = false;
//b->ok = b->c = false;
b->link_address = ptr;
//b->i = false;
pro100_init_cu_base_addr(ctx);
pro100_wait_cu_ru_accepable(ctx);
if (false)
{
char tmp[4096];
SeBinToStrEx(tmp, sizeof(tmp), (void *)b, size);
debugprint("%s\n", tmp);
}
pro100_write(ctx, PRO100_CSR_OFFSET_SCB_GENERAL_POINTER, (UINT)ptr, 4);
pro100_write(ctx, PRO100_CSR_OFFSET_SCB_COMMAND_WORD_0,
PRO100_MAKE_CU_RU_COMMAND(PRO100_CU_CMD_START, PRO100_RU_CMD_NOOP), 1);
pro100_flush(ctx);
// Wait until the NIC completes the command
//debugprint("[");
start_tick = get_time ();
timeouted = true;
while ((start_tick + 1000000ULL) >= get_time ())
{
if (b->c)
{
timeouted = false;
break;
}
}
//debugprint("%u] ", b->c);
//b->c = true;
if (false)
{
UINT t = pro100_read(ctx, PRO100_CSR_OFFSET_SCB_STATUS_WORD_0, 1);
PRO100_SCB_STATUS_WORD_BIT *b = (PRO100_SCB_STATUS_WORD_BIT *)(void *)&t;
debugprint("STATUS CU=%u, RU=%u\n", b->cu_status, b->ru_status);
}
// Write back the result
SeCopy(op, (void *)b, size);
b2 = (PRO100_OP_BLOCK *)op;
// Restore the backed-up fields
b2->el = src_el;
b2->s = src_s;
b2->i = src_i;
b2->link_address = src_link_addr;
// Free the memory
pro100_free_page((void *)b, ptr);
if (timeouted && src_i)
{
//pro100_generate_int(ctx);
}
}
// Get the data size of an operation
UINT pro100_get_op_size(UINT op, void *data)
{
UINT ret = sizeof(PRO100_OP_BLOCK);
PRO100_OP_MCAST_ADDR_SETUP *mc = (PRO100_OP_MCAST_ADDR_SETUP *)data;
switch (op)
{
case PRO100_CU_OP_NOP:
break;
case PRO100_CU_OP_IA_SETUP:
ret += 6;
break;
case PRO100_CU_OP_CONFIG:
ret += 22 + 9;
break;
case PRO100_CU_OP_MCAST_ADDR_SETUP:
ret += mc->count + 2 + 2;
break;
case PRO100_CU_OP_LOAD_MICROCODE:
ret += 256;
break;
case PRO100_CU_OP_DUMP:
ret += 4;
break;
case PRO100_CU_OP_DIAG:
break;
}
return ret;
}
// Wait until the CU and RU can accept commands
void pro100_wait_cu_ru_accepable(PRO100_CTX *ctx)
{
bool flag = false;
// Argument check
if (ctx == NULL)
{
return;
}
//debugprint("{B");
while (true)
{
UINT t = pro100_read(ctx, PRO100_CSR_OFFSET_SCB_COMMAND_WORD_0, 1);
if (t == 0)
{
break;
}
if (flag == false)
{
flag = true;
// debugprint(" ** CU=%u, RU=%u ** \n",
// PRO100_GET_CU_COMMAND(t), PRO100_GET_RU_COMMAND(t));
}
}
// debugprint("} ");
}
// Physically transmit a packet
void pro100_send_packet_to_line(PRO100_CTX *ctx, void *buf, UINT size)
{
volatile PRO100_OP_BLOCK *b;
PRO100_OP_TRANSMIT *t;
phys_t ptr;
// Argument check
if (ctx == NULL || buf == NULL || size == 0)
{
return;
}
// Allocate memory
b = pro100_alloc_page(&ptr);
t = (PRO100_OP_TRANSMIT *)b;
t->op_block.op = PRO100_CU_OP_TRANSMIT;
t->op_block.transmit_flexible_mode = 0;
t->op_block.transmit_raw_packet = 0;
t->op_block.transmit_cid = 31;
t->op_block.i = false;
t->op_block.s = false;
t->op_block.el = true;
t->op_block.link_address = 0;
t->tbd_array_address = 0xffffffff;
t->data_bytes = size;
t->threshold = 1;
SeCopy(((UCHAR *)b) + sizeof(PRO100_OP_TRANSMIT) +
(ctx->use_standard_txcb ? 0 : sizeof(PRO100_TBD) * 2), buf, size);
if (false)
{
char tmp[8000];
SeBinToStrEx(tmp, sizeof(tmp), (void *)b, size + sizeof(PRO100_OP_TRANSMIT));
debugprint("%s\n\n", tmp);
}
pro100_init_cu_base_addr(ctx);
pro100_wait_cu_ru_accepable(ctx);
pro100_write(ctx, PRO100_CSR_OFFSET_SCB_GENERAL_POINTER, (UINT)ptr, 4);
pro100_write(ctx, PRO100_CSR_OFFSET_SCB_COMMAND_WORD_0,
PRO100_MAKE_CU_RU_COMMAND(PRO100_CU_CMD_START, PRO100_RU_CMD_NOOP), 1);
pro100_flush(ctx);
//debugprint("1\n");
while (b->c == false);
//debugprint("2\n");
pro100_free_page((void *)b, ptr);
}
// Read the packet that is about to be transmitted
UINT pro100_read_send_packet(PRO100_CTX *ctx, phys_t addr, void *buf)
{
PRO100_OP_BLOCK_MAX b;
PRO100_OP_TRANSMIT *t;
UINT ret = 0;
// Argument check
if (ctx == NULL || addr == 0 || buf == NULL)
{
return 0;
}
// Read from memory
pro100_mem_read(&b, addr, sizeof(b));
t = (PRO100_OP_TRANSMIT *)&b;
if (t->op_block.transmit_flexible_mode == 0)
{
// Simplified mode
ret = t->data_bytes;
if (ret > PRO100_MAX_PACKET_SIZE)
{
// Packet size is too large
return 0;
}
// Copy the packet data
SeCopy(buf, ((UCHAR *)&b) + sizeof(PRO100_OP_TRANSMIT), ret);
return ret;
}
else
{
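// Flexible mode: the packet data is described by an array of TBDs
// (transmit buffer descriptors) instead of being stored inline after
// the command block.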
UINT num_tbd = t->tbd_count;
PRO100_TBD *tbd_array = SeZeroMalloc(num_tbd * sizeof(PRO100_TBD));
UINT total_packet_size;
UINT i;
if (ctx->use_standard_txcb)
{
// Standard Flexible mode
pro100_mem_read(tbd_array, (phys_t)t->tbd_array_address, num_tbd * sizeof(PRO100_TBD));
}
else
{
// Extended Flexible mode: read the first two TBDs from the end of the command block
UINT num_tbd_ex = num_tbd;
if (num_tbd_ex >= 2)
{
num_tbd_ex = 2;
}
SeCopy(tbd_array, ((UCHAR *)&b) + sizeof(PRO100_OP_TRANSMIT), num_tbd_ex * sizeof(PRO100_TBD));
// If any TBDs remain, read them from the TBD array address
if ((num_tbd - num_tbd_ex) >= 1)
{
pro100_mem_read(((UCHAR *)tbd_array) + num_tbd_ex * sizeof(PRO100_TBD),
(phys_t)t->tbd_array_address,
(num_tbd - num_tbd_ex) * sizeof(PRO100_TBD));
}
}
// Read the packet using the TBD array
total_packet_size = 0;
for (i = 0;i < num_tbd;i++)
{
PRO100_TBD *tbd = &tbd_array[i];
total_packet_size += tbd->size;
}
if (total_packet_size > PRO100_MAX_PACKET_SIZE)
{
// Packet size is too large
ret = 0;
}
else
{
// Read the packet data
UCHAR *current_ptr = buf;
for (i = 0;i < num_tbd;i++)
{
PRO100_TBD *tbd = &tbd_array[i];
pro100_mem_read(current_ptr, (phys_t)tbd->data_address, tbd->size);
current_ptr += tbd->size;
}
ret = total_packet_size;
}
SeFree(tbd_array);
return ret;
}
}
// Process operations started by the guest OS
void pro100_proc_guest_op(PRO100_CTX *ctx)
{
phys_t ptr;
// Argument check
if (ctx == NULL)
{
return;
}
if (ctx->guest_cu_started == false || ctx->guest_cu_current_pointer == 0)
{
return;
}
ptr = ctx->guest_cu_current_pointer;
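// Walk the guest's command block list: transmit operations are handed
// to the virtual NIC receive callback, everything else is executed on
// the real NIC via pro100_exec_cu_op().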
// Read until the end of the list
while (true)
{
PRO100_OP_BLOCK_MAX b;
if (ctx->guest_cu_suspended)
{
// While suspended, check whether the S bit of the last
// operation from the previous run has been cleared
pro100_mem_read(&b, ptr, sizeof(b));
if (b.op_block.s)
{
// Still suspended
break;
}
ptr = ctx->guest_cu_next_pointer;
ctx->guest_cu_suspended = false;
}
pro100_mem_read(&b, ptr, sizeof(b));
if (b.op_block.op == PRO100_CU_OP_TRANSMIT)
{
// Transmit processing
PRO100_OP_TRANSMIT *t = (PRO100_OP_TRANSMIT *)&b;
if (false)
{
debugprint("flexible_mode=%u, raw_packet=%u, cid=%u, array=0x%x, thres=%u, tcount=%u, size=%u\n",
t->op_block.transmit_flexible_mode,
t->op_block.transmit_raw_packet,
t->op_block.transmit_cid,
t->tbd_array_address,
t->threshold,
t->tbd_count,
t->data_bytes);
}
//debugprint("SEND\n");
if (false)
{
char tmp[4096];
SeBinToStrEx(tmp, sizeof(tmp), &b, 24);
debugprint("%s\n", tmp);
}
if (true)
{
UCHAR buf[PRO100_MAX_PACKET_SIZE];
UINT packet_size = pro100_read_send_packet(ctx, ptr, buf);
#ifdef PRO100_PASS_MODE
pro100_write_recv_packet(ctx, buf, packet_size);
#else // PRO100_PASS_MODE
if (ctx->CallbackRecvVirtNic != NULL)
{
void *packet_data[1];
UINT packet_sizes[1];
packet_data[0] = buf;
packet_sizes[0] = packet_size;
ctx->CallbackRecvVirtNic(ctx, 1, packet_data, packet_sizes, ctx->CallbackRecvVirtNicParam, NULL);
}
#endif // PRO100_PASS_MODE
}
b.op_block.ok = b.op_block.c = true;
b.op_block.transmit_overrun = false;
pro100_mem_write(ptr, &b, sizeof(PRO100_OP_BLOCK) + sizeof(PRO100_OP_TRANSMIT));
if (b.op_block.i)
{
pro100_generate_int(ctx);
}
}
else
{
// Non-transmit processing
UINT size = pro100_get_op_size(b.op_block.op, &b);
//debugprint("0x%x: OP: %u Size=%u\n", (UINT)ptr, b.op_block.op, size);
switch (b.op_block.op)
{
case PRO100_CU_OP_IA_SETUP:
// IA Setup
SeCopy(ctx->mac_address, ((UCHAR *)&b) + sizeof(PRO100_OP_BLOCK), 6);
pro100_init_vpn_client(ctx);
break;
case PRO100_CU_OP_CONFIG:
// Configure
ctx->use_standard_txcb = ((((UCHAR *)&b)[sizeof(PRO100_OP_BLOCK) + 6] & 0x10) ? true : false);
break;
}
pro100_exec_cu_op(ctx, &b, size);
pro100_mem_write(ptr, &b, size);
}
if (b.op_block.el)
{
// End-of-list flag
ctx->guest_cu_started = false;
ctx->guest_cu_current_pointer = 0;
ctx->guest_cu_suspended = false;
//debugprint("EL\n");
pro100_generate_int(ctx);
break;
}
if (b.op_block.s)
{
// Suspend flag
ctx->guest_cu_suspended = true;
ctx->guest_cu_next_pointer = b.op_block.link_address;
//debugprint("SUSPEND\n");
pro100_generate_int(ctx);
break;
}
ptr = b.op_block.link_address;
}
if (ctx->guest_cu_started)
{
ctx->guest_cu_current_pointer = ptr;
}
}
// Physical memory read
void pro100_mem_read(void *buf, phys_t addr, UINT size)
{
// Argument check
if (addr == 0 || buf == NULL || size == 0)
{
return;
}
mmio_gphys_access(addr, false, buf, size, MAPMEM_UC);
}
// Physical memory write
void pro100_mem_write(phys_t addr, void *buf, UINT size)
{
// Argument check
if (addr == 0 || buf == NULL || size == 0)
{
return;
}
mmio_gphys_access(addr, true, buf, size, MAPMEM_UC);
}
// Write hook
bool pro100_hook_write(PRO100_CTX *ctx, UINT offset, UINT data, UINT size)
{
// Argument check
if (ctx == NULL)
{
return false;
}
switch (offset)
{
case PRO100_CSR_OFFSET_SCB_COMMAND_WORD_0: // RU Command, CU Command
if (size != 1)
{
debugprint("pro100_hook_write: PRO100_CSR_OFFSET_SCB_COMMAND_WORD_0: BAD SIZE: %u\n", size);
}
if (size == 1)
{
UCHAR b = (UCHAR)data;
UINT ru = PRO100_GET_RU_COMMAND(b);
UINT cu = PRO100_GET_CU_COMMAND(b);
char *s1 = NULL;
char *s2 = NULL;
s1 = pro100_get_ru_command_string(ru);
s2 = pro100_get_cu_command_string(cu);
if (s1 != NULL || s2 != NULL)
{
//debugprint("[%s, %s] ", s1, s2);
}
switch (cu)
{
case PRO100_CU_CMD_NOOP:
break;
case PRO100_CU_CMD_START:
//debugprint("GUEST PRO100_CU_CMD_START: 0x%x\n", (UINT)ctx->guest_last_general_pointer);
ctx->guest_cu_started = true;
ctx->guest_cu_suspended = false;
ctx->guest_cu_start_pointer = (phys_t)ctx->guest_last_general_pointer;
ctx->guest_cu_current_pointer = ctx->guest_cu_start_pointer;
pro100_proc_guest_op(ctx);
cu = PRO100_CU_CMD_NOOP;
break;
case PRO100_CU_CMD_RESUME:
//debugprint("GUEST PRO100_CU_CMD_RESUME\n");
pro100_proc_guest_op(ctx);
cu = PRO100_CU_CMD_NOOP;
break;
case PRO100_CU_CMD_LOAD_CU_BASE:
//debugprint("GUEST PRO100_CU_CMD_LOAD_CU_BASE: 0x%x\n", (UINT)ctx->guest_last_general_pointer);
cu = PRO100_CU_CMD_NOOP;
ctx->cu_base_inited = false;
break;
case PRO100_CU_CMD_LOAD_DUMP_ADDR:
debugprint("GUEST PRO100_CU_CMD_LOAD_DUMP_ADDR: 0x%x\n", (UINT)ctx->guest_last_general_pointer);
cu = PRO100_CU_CMD_NOOP;
//pro100_write(ctx, PRO100_CSR_OFFSET_SCB_GENERAL_POINTER, (UINT)ctx->guest_last_general_pointer, 4);
ctx->guest_last_counter_pointer = ctx->guest_last_general_pointer;
break;
case PRO100_CU_CMD_DUMP_STAT:
case PRO100_CU_CMD_DUMP_AND_RESET_STAT:
debugprint(cu == PRO100_CU_CMD_DUMP_STAT ? "GUEST PRO100_CU_CMD_DUMP_STAT\n" : "GUEST PRO100_CU_CMD_DUMP_AND_RESET_STAT\n");
if (ctx->guest_last_counter_pointer != 0)
{
UINT dummy_data[21];
// The guest OS is requesting a dump of the counter data, so return fake data
// (without this, some Windows drivers stall in a 2-second busy-wait loop)
memset(dummy_data, 0, sizeof(dummy_data));
dummy_data[16] = dummy_data[19] = dummy_data[20] = (cu == PRO100_CU_CMD_DUMP_STAT ? 0xa005 : 0xa007);
pro100_mem_write((phys_t)ctx->guest_last_counter_pointer, dummy_data, sizeof(dummy_data));
}
else
{
debugprint("error: ctx->guest_last_counter_pointer == 0\n");
}
cu = PRO100_CU_CMD_NOOP;
break;
case PRO100_CU_CMD_HPQ_START:
//debugprint("GUEST PRO100_CU_CMD_HPQ_START: 0x%x\n", (UINT)ctx->guest_last_general_pointer);
cu = PRO100_CU_CMD_NOOP;
break;
case PRO100_CU_CMD_CU_STAT_RESUME:
//debugprint("GUEST PRO100_CU_CMD_CU_STAT_RESUME\n");
//pro100_write(ctx, PRO100_CSR_OFFSET_SCB_GENERAL_POINTER, (UINT)ctx->guest_last_general_pointer, 4);
//cu = PRO100_CU_CMD_NOOP;
break;
case PRO100_CU_CMD_HPQ_RESUME:
//debugprint("GUEST PRO100_CU_CMD_HPQ_RESUME\n");
cu = PRO100_CU_CMD_NOOP;
break;
default:
printf("!!!! GUEST SEND UNKNOWN CU CMD: %u\n", cu);
break;
}
switch (ru)
{
case PRO100_RU_CMD_NOOP:
break;
case PRO100_RU_CMD_START:
//debugprint("GUEST PRO100_RU_CMD_START: 0x%x\n", (UINT)ctx->guest_last_general_pointer);
ru = PRO100_CU_CMD_NOOP;
// Remember the receive buffer pointer specified by the guest OS
ctx->guest_rfd_current = ctx->guest_rfd_first = (phys_t)ctx->guest_last_general_pointer;
pro100_poll_ru(ctx);
break;
case PRO100_RU_CMD_RESUME:
//debugprint("GUEST PRO100_RU_CMD_RESUME\n");
ru = PRO100_CU_CMD_NOOP;
ctx->guest_ru_suspended = false;
pro100_poll_ru(ctx);
break;
case PRO100_RU_CMD_LOAD_HEADER_DATA_SIZE:
//debugprint("GUEST PRO100_RU_CMD_LOAD_HEADER_DATA_SIZE: 0x%x\n", (UINT)ctx->guest_last_general_pointer);
ru = PRO100_CU_CMD_NOOP;
break;
case PRO100_RU_CMD_LOAD_RU_BASE:
//debugprint("GUEST PRO100_RU_CMD_LOAD_RU_BASE: 0x%x\n", (UINT)ctx->guest_last_general_pointer);
ru = PRO100_CU_CMD_NOOP;
ctx->ru_base_inited = false;
break;
default:
printf("!!!! GUEST SEND UNKNOWN RU CMD: %u\n", ru);
break;
}
ru = 0;
b = PRO100_MAKE_CU_RU_COMMAND(cu, ru);
if ((cu != 0 || ru != 0) && (cu == 0 || ru == 0))
{
debugprint("<P");
pro100_wait_cu_ru_accepable(ctx);
debugprint(">");
pro100_write(ctx, PRO100_CSR_OFFSET_SCB_GENERAL_POINTER, (UINT)ctx->guest_last_general_pointer, 4);
pro100_write(ctx, PRO100_CSR_OFFSET_SCB_COMMAND_WORD_0, b, 1);
debugprint("<Y");
pro100_wait_cu_ru_accepable(ctx);
debugprint(">");
}
return true;
}
break;
case PRO100_CSR_OFFSET_SCB_COMMAND_WORD_1: // Interrupt control bits
if (size != 1)
{
debugprint("pro100_hook_write: PRO100_CSR_OFFSET_SCB_COMMAND_WORD_1: BAD SIZE: %u\n", size);
}
if (size == 1)
{
ctx->int_mask_guest_set = data;
if (true)
{
//PRO100_INT_BIT *ib = (PRO100_INT_BIT *)&ctx->int_mask_guest_set;
//debugprint("int mask: M=%u 0x%x\n", (UINT)ib->mask_all, (UINT)ctx->int_mask_guest_set);
/*
ib->mask_all = true;
ib->fcp = ib->er = ib->rnr = ib->cna = ib->fr = ib->cx = 1;
*/
//pro100_beep((ib->mask_all == 0 ? 880 : 440), 200);
pro100_write(ctx, PRO100_CSR_OFFSET_SCB_COMMAND_WORD_1,
ctx->int_mask_guest_set, 4);
return true;
}
}
break;
case PRO100_CSR_OFFSET_SCB_GENERAL_POINTER: // General pointer
if (size != 4)
{
debugprint("pro100_hook_write: PRO100_CSR_OFFSET_SCB_GENERAL_POINTER: BAD SIZE: %u\n", size);
}
if (size == 4)
{
ctx->guest_last_general_pointer = data;
//debugprint("GUEST WRITES POINTER: 0x%x\n", data);
}
return true;
case PRO100_CSR_OFFSET_SCB_STATUS_WORD_1: // STAT/ACK
/*if (size == 1)
{
debugprint("<ACK>");
if (data != 0)
{
pro100_write(ctx, PRO100_CSR_OFFSET_SCB_STATUS_WORD_1, 0xff, 1);
}
return true;
}*/
pro100_poll_ru(ctx);
break;
case PRO100_CSR_OFFSET_SCB_PORT: // Port
break;
case PRO100_CSR_OFFSET_SCB_MDI: // MDI
break;
default:
if (offset == 0 && size == 2)
{
UCHAR buf[4];
*((UINT *)buf) = data;
pro100_hook_write(ctx, 1, buf[1], 1);
return true;
}
else
{
if (offset < 0x10)
{
printf("*** WRITE ACCESS TO 0x%x size=%u data=0x%x\n", offset, size, data);
}
}
break;
}
return false;
}
// Read hook
bool pro100_hook_read(PRO100_CTX *ctx, UINT offset, UINT size, UINT *data)
{
// Argument check
if (ctx == NULL)
{
return false;
}
switch (offset)
{
case PRO100_CSR_OFFSET_SCB_COMMAND_WORD_0: // Command execution status
if (size != 1)
{
debugprint("pro100_hook_read: PRO100_CSR_OFFSET_SCB_COMMAND_WORD_0: BAD SIZE: %u\n", size);
}
if (size == 1)
{
UINT t = pro100_read(ctx, PRO100_CSR_OFFSET_SCB_COMMAND_WORD_0, 1);
t = PRO100_MAKE_CU_RU_COMMAND(0, 0);
*data = t;
return true;
}
break;
case PRO100_CSR_OFFSET_SCB_STATUS_WORD_0: // RU and CU status
if (size != 1)
{
if (size != 2)
{
debugprint("pro100_hook_read: PRO100_CSR_OFFSET_SCB_STATUS_WORD_0: BAD SIZE: %u\n", size);
}
else
{
UINT data1 = 0, data2 = 0;
UCHAR data3[4];
pro100_hook_read(ctx, PRO100_CSR_OFFSET_SCB_STATUS_WORD_0, 1, &data1);
pro100_hook_read(ctx, PRO100_CSR_OFFSET_SCB_STATUS_WORD_1, 1, &data2);
data3[0] = (UCHAR)data1;
data3[1] = (UCHAR)data2;
data3[2] = data3[3] = 0;
*data = *((UINT *)data3);
//debugprint("*data = 0x%x\n", *data);
return true;
}
}
if (size == 1)
{
UINT t = pro100_read(ctx, PRO100_CSR_OFFSET_SCB_STATUS_WORD_0, 1);
PRO100_SCB_STATUS_WORD_BIT *sb = (PRO100_SCB_STATUS_WORD_BIT *)(void *)&t;
sb->ru_status = 2;
*data = t;
return true;
}
break;
case PRO100_CSR_OFFSET_SCB_STATUS_WORD_1: // STAT/ACK
pro100_poll_ru(ctx);
if (size != 1)
{
debugprint("pro100_hook_read: PRO100_CSR_OFFSET_SCB_STATUS_WORD_1: BAD SIZE: %u\n", size);
}
if (size == 1)
{
UINT t;
PRO100_STAT_ACK *sa;
//debugprint("<INT>");
pro100_proc_guest_op(ctx);
t = pro100_read(ctx, offset, size);
pro100_write(ctx, offset, t, 1);
sa = (PRO100_STAT_ACK *)(void *)&t;
sa->cna = sa->cx_tno = sa->fr = sa->rnr = sa->mdi = sa->swi = sa->fcp = 1;
sa->rnr = 0;
*data = t;
return true;
}
break;
case PRO100_CSR_OFFSET_SCB_COMMAND_WORD_1: // Interrupt control bits
if (size != 1)
{
debugprint("pro100_hook_read: PRO100_CSR_OFFSET_SCB_COMMAND_WORD_1: BAD SIZE: %u\n", size);
}
if (size == 1)
{
UINT t = ctx->int_mask_guest_set;
PRO100_INT_BIT *bi = (PRO100_INT_BIT *)(void *)&t;
bi->si = false;
*data = t;
return true;
}
break;
case PRO100_CSR_OFFSET_SCB_GENERAL_POINTER: // General pointer
if (size != 4)
{
debugprint("pro100_hook_read: PRO100_CSR_OFFSET_SCB_GENERAL_POINTER: BAD SIZE: %u\n", size);
}
if (size == 4)
{
*data = ctx->guest_last_general_pointer;
return true;
}
break;
default:
if (offset < 0x10)
{
printf("*** READ ACCESS TO 0x%x size=%u\n", offset, size);
}
break;
}
return false;
}
// Flush write operations
void pro100_flush(PRO100_CTX *ctx)
{
// Argument check
if (ctx == NULL)
{
return;
}
pro100_read(ctx, PRO100_CSR_OFFSET_SCB_STATUS_WORD_0, 1);
}
// Write
void pro100_write(PRO100_CTX *ctx, UINT offset, UINT data, UINT size)
{
// Argument check
if (ctx == NULL)
{
return;
}
switch (size) {
case 1:
dres_reg_write8 (ctx->r_mm, offset, data);
break;
case 2:
dres_reg_write16 (ctx->r_mm, offset, data);
break;
case 4:
dres_reg_write32 (ctx->r_mm, offset, data);
break;
default:
panic ("%s(): read len %u\n", __func__, size);
}
}
// Read
UINT pro100_read(PRO100_CTX *ctx, UINT offset, UINT size)
{
UINT data = 0;
// Argument check
if (ctx == NULL)
{
return 0;
}
switch (size) {
case 1:
dres_reg_read8 (ctx->r_mm, offset, &data);
break;
case 2:
dres_reg_read16 (ctx->r_mm, offset, &data);
break;
case 4:
dres_reg_read32 (ctx->r_mm, offset, &data);
break;
default:
panic ("%s(): read len %u\n", __func__, size);
}
return data;
}
// Generate an interrupt
void pro100_generate_int(PRO100_CTX *ctx)
{
PRO100_INT_BIT ib;
// Argument check
if (ctx == NULL)
{
return;
}
SeCopy(&ib, &ctx->int_mask_guest_set, sizeof(UINT));
if (ib.mask_all != 0)
{
return;
}
ib.si = 1;
//debugprint("*");
pro100_write(ctx, PRO100_CSR_OFFSET_SCB_COMMAND_WORD_1, *((UINT *)(void *)&ib), 1);
}
// MMIO handler for the CSR registers
static enum dres_reg_ret_t
pro100_mm_handler (const struct dres_reg *r, void *handle, phys_t offset,
bool wr, void *buf, uint len)
{
int ret = 0;
PRO100_CTX *ctx = (PRO100_CTX *)handle;
spinlock_lock (&ctx->lock);
// Range check
if (offset < (UINT64)PRO100_CSR_SIZE)
{
if (len == 1 || len == 2 || len == 4)
{
if (wr == 0)
{
UINT ret_data = 0;
if (pro100_hook_read(ctx, offset, len, &ret_data) == false)
{
ret_data = pro100_read(ctx, offset, len);
}
if (len == 1)
{
*((UCHAR *)buf) = (UCHAR)ret_data;
}
else if (len == 2)
{
*((USHORT *)buf) = (USHORT)ret_data;
}
else if (len == 4)
{
*((UINT *)buf) = (UINT)ret_data;
}
}
else
{
UINT data = 0;
if (len == 1)
{
data = (UINT)(*((UCHAR *)buf));
}
else if (len == 2)
{
data = (UINT)(*((USHORT *)buf));
}
else if (len == 4)
{
data = (UINT)(*((UINT *)buf));
}
if (pro100_hook_write(ctx, offset, data, len) == false)
{
pro100_write(ctx, offset, data, len);
}
}
ret = 1;
}
}
spinlock_unlock (&ctx->lock);
return ret;
}
// Convert interrupt control bits to a string (for debugging)
void pro100_get_int_bit_string(char *str, UCHAR value)
{
PRO100_INT_BIT *ib = (PRO100_INT_BIT *)&value;
snprintf(str, 1024, "M=%u SI=%u FCP=%u ER=%u RNR=%u CNA=%u FR=%u CX=%u",
ib->mask_all, ib->si, ib->fcp, ib->er, ib->rnr, ib->cna, ib->fr, ib->cx);
}
// Convert STAT/ACK bits to a string (for debugging)
void pro100_get_stat_ack_string(char *str, UCHAR value)
{
PRO100_STAT_ACK *sa = (PRO100_STAT_ACK *)&value;
snprintf(str, 1024, "FCP=%u SWI=%u MDI=%u RNR=%u CNA=%u FR=%u CX=%u",
sa->fcp, sa->swi, sa->mdi, sa->rnr, sa->cna, sa->fr, sa->cx_tno);
}
// Convert a CU command to a string (for debugging)
char *pro100_get_cu_command_string(UINT cu)
{
char *s = NULL;
switch (cu)
{
case PRO100_CU_CMD_START:
s = "CU Start";
break;
case PRO100_CU_CMD_RESUME:
s = "CU Resume";
break;
case PRO100_CU_CMD_HPQ_START:
s = "CU HPQ Start";
break;
case PRO100_CU_CMD_LOAD_DUMP_ADDR:
s = "Load Dump Addr";
break;
case PRO100_CU_CMD_DUMP_STAT:
s = "Dump Stat";
break;
case PRO100_CU_CMD_LOAD_CU_BASE:
s = "Load CU Base";
break;
case PRO100_CU_CMD_DUMP_AND_RESET_STAT:
s = "Dump and Reset Stat";
break;
case PRO100_CU_CMD_CU_STAT_RESUME:
s = "CU Stat Resume";
break;
case PRO100_CU_CMD_HPQ_RESUME:
s = "CU HPQ Resume";
break;
case 8: s = "Unknown 8"; break;
case 9: s = "Unknown 9"; break;
case 12: s = "Unknown 12"; break;
case 13: s = "Unknown 13"; break;
case 14: s = "Unknown 14"; break;
case 15: s = "Unknown 15"; break;
}
return s;
}
// Convert an RU command to a string (for debugging)
char *pro100_get_ru_command_string(UINT ru)
{
char *s1 = NULL;
switch (ru)
{
case PRO100_RU_CMD_START:
s1 = "RU Start";
break;
case PRO100_RU_CMD_RESUME:
s1 = "RU Resume";
break;
case PRO100_RU_CMD_RECV_DMA_REDIRECT:
s1 = "Receive DMA Redirect";
break;
case PRO100_RU_CMD_ABORT:
s1 = "RU Abort";
break;
case PRO100_RU_CMD_LOAD_HEADER_DATA_SIZE:
s1 = "Load Header Data Size";
break;
case PRO100_RU_CMD_LOAD_RU_BASE:
s1 = "Load RU Base";
break;
case PRO100_RU_CMD_RBD_RESUME:
s1 = "RBD Resume";
break;
}
return s1;
}
// I/O handler: unused
static enum dres_reg_ret_t
pro100_io_handler (const struct dres_reg *r, void *handle, phys_t offset,
bool wr, void *buf, uint len)
{
debugprint ("IO offset=%u, size=%u, dir=%u\n", (UINT)offset,
(UINT)len, (UINT)wr);
return DRES_REG_RET_PASSTHROUGH;
}
// PCI configuration register read handler
int
pro100_config_read (struct pci_device *dev, u8 iosize, u16 offset,
union mem *data)
{
pci_handle_default_config_read (dev, iosize, offset, data);
return CORE_IO_RET_DONE;
}
// PCI configuration register write handler
int
pro100_config_write (struct pci_device *dev, u8 iosize, u16 offset,
union mem *data)
{
PRO100_CTX *ctx = (PRO100_CTX *)dev->host;
UINT mode = 0;
UINT addr = 0;
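// Watch writes to the CSR BARs so the MMIO/I/O register handlers can
// be re-registered when the guest relocates them.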
if (iosize == 4)
{
switch (offset)
{
case PRO100_PCI_CONFIG_32_CSR_MMAP_ADDR_REG:
if (data->dword != 0 && data->dword != 0xffffffff)
{
mode = 1;
}
break;
case PRO100_PCI_CONFIG_32_CSR_IO_ADDR_REG:
if (data->dword != 0 && data->dword != 0xffffffff)
{
mode = 2;
}
break;
}
}
pci_handle_default_config_write (dev, iosize, offset, data);
switch (mode)
{
case 1:
addr = dev->config_space.regs32[PRO100_PCI_CONFIG_32_CSR_MMAP_ADDR_REG / sizeof(UINT)] & PCI_CONFIG_BASE_ADDRESS_MEMMASK;
//if (addr < 0xF0000000)
{
if (ctx->csr_mm_addr != addr)
{
DWORD old_mm_addr = ctx->csr_mm_addr;
// The memory address of the NIC's CSR registers has changed
ctx->csr_mm_addr = addr;
if (old_mm_addr != 0)
{
if (ctx->r_mm != NULL)
{
// Unregister the old MMIO handler if one is registered
dres_reg_unregister_handler (ctx->r_mm);
dres_reg_free (ctx->r_mm);
}
}
ctx->r_mm = dres_reg_alloc (addr, 64,
DRES_REG_TYPE_MM, pci_dres_reg_translate, ctx->dev, 0);
dres_reg_register_handler (ctx->r_mm,
pro100_mm_handler, ctx);
debugprint("vpn_pro100: mmio_register 0x%x\n", ctx->csr_mm_addr);
}
}
break;
case 2:
if (true)
{
addr = dev->config_space.regs32[PRO100_PCI_CONFIG_32_CSR_IO_ADDR_REG / sizeof(UINT)] & PCI_CONFIG_BASE_ADDRESS_IOMASK;
if (ctx->csr_io_addr != addr)
{
// The I/O address of the NIC's CSR registers has changed
UINT old_io_addr = ctx->csr_io_addr;
ctx->csr_io_addr = addr;
if (old_io_addr)
{
dres_reg_unregister_handler (ctx->r_io);
dres_reg_free (ctx->r_io);
}
ctx->r_io = dres_reg_alloc (addr,
PRO100_CSR_SIZE, DRES_REG_TYPE_IO, pci_dres_reg_translate, ctx->dev, 0);
dres_reg_register_handler (ctx->r_io,
pro100_io_handler, ctx);
}
}
break;
}
return CORE_IO_RET_DONE;
}
// Allocate the receive buffers
void pro100_alloc_recv_buffer(PRO100_CTX *ctx)
{
UINT i;
// Argument check
if (ctx == NULL)
{
return;
}
ctx->num_recv = PRO100_NUM_RECV_BUFFERS;
ctx->recv = SeMalloc(sizeof(PRO100_RECV) * ctx->num_recv);
for (i = 0;i < ctx->num_recv;i++)
{
PRO100_RECV *r = &ctx->recv[i];
// Allocate memory
r->rfd = (PRO100_RFD *)pro100_alloc_page(&r->rfd_ptr);
}
// Initialize the data structures
pro100_init_recv_buffer(ctx);
}
// Initialize the receive buffers
void pro100_init_recv_buffer(PRO100_CTX *ctx)
{
UINT i;
// Argument check
if (ctx == NULL)
{
return;
}
// Initialize the linked-list structure
for (i = 0;i < ctx->num_recv;i++)
{
PRO100_RECV *r = &ctx->recv[i];
PRO100_RFD *rfd = r->rfd;
SeZero(rfd, sizeof(PRO100_RFD));
rfd->buffer_size = PRO100_MAX_PACKET_SIZE;
if (i != (ctx->num_recv - 1))
{
r->next_recv = &ctx->recv[i + 1];
rfd->s = rfd->el = false;
rfd->link_address = (UINT)r->next_recv->rfd_ptr;
}
else
{
r->next_recv = NULL;
rfd->link_address = 0;
rfd->s = false;
rfd->el = true;
}
}
ctx->first_recv = &ctx->recv[0];
ctx->last_recv = &ctx->recv[ctx->num_recv - 1];
ctx->current_recv = ctx->first_recv;
}
// New device discovered
void pro100_new(struct pci_device *dev)
{
PRO100_CTX *ctx = SeZeroMalloc(sizeof(PRO100_CTX));
if (dev->as_dma != as_passvm)
panic ("%s: IOMMU pass-through is not supported", __func__);
debugprint ("pro100_new\n");
pci_vtd_trans_add_remap_with_vmm_mem (dev);
ctx->dev = dev;
ctx->net_handle = net_new_nic (dev->driver_options[0], false);
spinlock_init (&ctx->lock);
dev->host = ctx;
dev->driver->options.use_base_address_mask_emulation = 1;
pro100_alloc_recv_buffer(ctx);
if (pro100_ctx == NULL)
{
pro100_ctx = ctx;
}
else
{
debugprint("Error: Two or more pro100 devices found.\n");
pro100_beep(1234, 5000);
}
}
static struct pci_driver vpn_pro100_driver =
{
.name = driver_name,
.longname = driver_longname,
.driver_options = "net",
.device = "id=8086:1229",
.new = pro100_new,
.config_read = pro100_config_read,
.config_write = pro100_config_write,
};
// Initialization
void pro100_init()
{
debugprint("pro100_init() start.\n");
pci_register_driver(&vpn_pro100_driver);
debugprint("pro100_init() end.\n");
}
PCI_DRIVER_INIT(pro100_init);
</document_content>
</document>
<document index="4">
<source>./drivers/net/pro100.h</source>
<document_content>
/*
* Copyright (c) 2007, 2008 University of Tsukuba
* Copyright (C) 2007, 2008
* National Institute of Information and Communications Technology
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* 1. Redistributions of source code must retain the above copyright notice,
* this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright notice,
* this list of conditions and the following disclaimer in the documentation
* and/or other materials provided with the distribution.
* 3. Neither the name of the University of Tsukuba nor the names of its
* contributors may be used to endorse or promote products derived from
* this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
* LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef _CORE_VPN_PRO100_H
#define _CORE_VPN_PRO100_H
#include <core/types.h>
#include <io.h>
#include <pci.h>
#define UCHAR unsigned char
#define UINT unsigned int
#define DWORD unsigned int
struct dres_reg;
/// Constants
#define PRO100_MAX_OP_BLOCK_SIZE (PRO100_MAX_PACKET_SIZE + 16)
#define PRO100_MAX_PACKET_SIZE 1568
#define PRO100_NUM_RECV_BUFFERS 256
#define PRO100_PCI_CONFIG_32_CSR_MMAP_ADDR_REG 16
#define PRO100_PCI_CONFIG_32_CSR_IO_ADDR_REG 20
#define PRO100_CSR_SIZE 64
#define PRO100_CSR_OFFSET_SCB_STATUS_WORD_0 0
#define PRO100_CSR_OFFSET_SCB_STATUS_WORD_1 1
#define PRO100_CSR_OFFSET_SCB_COMMAND_WORD_0 2
#define PRO100_CSR_OFFSET_SCB_COMMAND_WORD_1 3
#define PRO100_CSR_OFFSET_SCB_GENERAL_POINTER 4
#define PRO100_CSR_OFFSET_SCB_PORT 8
#define PRO100_CSR_OFFSET_SCB_MDI 16
#define PRO100_CU_CMD_NOOP 0
#define PRO100_CU_CMD_START 1
#define PRO100_CU_CMD_RESUME 2
#define PRO100_CU_CMD_HPQ_START 3
#define PRO100_CU_CMD_LOAD_DUMP_ADDR 4
#define PRO100_CU_CMD_DUMP_STAT 5
#define PRO100_CU_CMD_LOAD_CU_BASE 6
#define PRO100_CU_CMD_DUMP_AND_RESET_STAT 7
#define PRO100_CU_CMD_CU_STAT_RESUME 10
#define PRO100_CU_CMD_HPQ_RESUME 11
#define PRO100_RU_CMD_NOOP 0
#define PRO100_RU_CMD_START 1
#define PRO100_RU_CMD_RESUME 2
#define PRO100_RU_CMD_RECV_DMA_REDIRECT 3
#define PRO100_RU_CMD_ABORT 4
#define PRO100_RU_CMD_LOAD_HEADER_DATA_SIZE 5
#define PRO100_RU_CMD_LOAD_RU_BASE 6
#define PRO100_RU_CMD_RBD_RESUME 7
#define PRO100_CU_OP_NOP 0
#define PRO100_CU_OP_IA_SETUP 1
#define PRO100_CU_OP_CONFIG 2
#define PRO100_CU_OP_MCAST_ADDR_SETUP 3
#define PRO100_CU_OP_TRANSMIT 4
#define PRO100_CU_OP_LOAD_MICROCODE 5
#define PRO100_CU_OP_DUMP 6
#define PRO100_CU_OP_DIAG 7
// Macros
#define PRO100_GET_RU_COMMAND(x) ((x) & 0x07)
#define PRO100_GET_CU_COMMAND(x) ((((UINT)(x)) >> 4) & 0x0f)
#define PRO100_MAKE_CU_RU_COMMAND(cu, ru) (((((cu) & 0x0f) << 4) & 0xf0) | ((ru) & 0x07))
/// Structures
// RFD
typedef struct
{
UINT status : 13;
UINT ok : 1;
UINT zero1 : 1;
UINT c : 1;
UINT zero2 : 3;
UINT sf : 1;
UINT h : 1;
UINT zero3 : 9;
UINT s : 1;
UINT el : 1;
UINT link_address;
UINT reserved;
UINT recved_bytes : 14;
UINT f : 1;
UINT eof : 1;
UINT buffer_size : 14;
UINT zero4 : 2;
UCHAR data[PRO100_MAX_PACKET_SIZE];
} PRO100_RFD;
// Receive buffer
typedef struct PRO100_RECV
{
phys_t rfd_ptr; // Physical address of the RFD
PRO100_RFD *rfd; // Virtual address of the RFD
struct PRO100_RECV *next_recv; // Next receive buffer
} PRO100_RECV;
// Context
typedef struct
{
struct pci_device *dev; // Device
DWORD csr_mm_addr; // MMIO address
DWORD csr_io_addr; // I/O port address
struct dres_reg *r_io; // I/O handler
struct dres_reg *r_mm; // MMIO handler
UINT int_mask_guest_set; // Interrupt control bits set by the guest OS
bool guest_cu_started; // Whether the CU has been started by the guest OS
bool guest_cu_suspended; // Whether the CU has been suspended by the guest OS
UINT guest_last_general_pointer; // Last value the guest OS wrote to the general pointer
UINT guest_last_counter_pointer; // Last counter-data address written by the guest OS
phys_t guest_cu_start_pointer; // Pointer to the first guest OS operation
phys_t guest_cu_current_pointer; // Pointer to the current guest OS operation
phys_t guest_cu_next_pointer; // Pointer to the next guest OS operation (when suspended)
spinlock_t lock; // Lock
UCHAR mac_address[6]; // MAC address
UCHAR padding1[2];
bool use_standard_txcb; // Do not use extended TxCBs
bool cu_base_inited;
bool ru_base_inited;
bool host_ru_started; // Whether the RU has already been started
UINT num_recv; // Number of receive buffers
PRO100_RECV *recv; // Receive buffer array
PRO100_RECV *first_recv; // First receive buffer
PRO100_RECV *last_recv; // Last receive buffer
PRO100_RECV *current_recv; // Receive buffer currently being processed
phys_t guest_rfd_first; // First receive buffer specified by the guest OS
phys_t guest_rfd_current; // Receive buffer currently being processed
bool guest_ru_suspended; // Whether the RU has been suspended by the guest OS
bool vpn_inited; // Whether the VPN Client has been initialized
struct netdata *net_handle; // Handle of the VPN Client
net_recv_callback_t *CallbackRecvPhyNic; // Callback invoked when a packet is received from the physical NIC
void *CallbackRecvPhyNicParam; // Parameter for the physical NIC receive callback
net_recv_callback_t *CallbackRecvVirtNic; // Callback invoked when the guest OS tries to send a packet
void *CallbackRecvVirtNicParam; // Parameter for the guest OS send callback
} PRO100_CTX;
// STAT/ACK register
typedef struct
{
UINT fcp : 1;
UINT reserved : 1;
UINT swi : 1;
UINT mdi : 1;
UINT rnr : 1;
UINT cna : 1;
UINT fr : 1;
UINT cx_tno : 1;
} PRO100_STAT_ACK;
// SCB status word bits
typedef struct
{
UINT reserved1 : 1;
UINT ru_status : 4;
UINT cu_status : 3;
} PRO100_SCB_STATUS_WORD_BIT;
// Interrupt control bits
typedef struct
{
UINT mask_all : 1;
UINT si : 1;
UINT fcp : 1;
UINT er : 1;
UINT rnr : 1;
UINT cna : 1;
UINT fr : 1;
UINT cx : 1;
} PRO100_INT_BIT;
// Operation block
typedef struct
{
UINT reserved1 : 8;
UINT reserved2 : 4;
UINT transmit_overrun : 1;
UINT ok : 1;
UINT reserved3 : 1;
UINT c : 1;
UINT op : 3;
UINT transmit_flexible_mode : 1;
UINT transmit_raw_packet : 1;
UINT reserved4 : 3;
UINT transmit_cid : 5;
UINT i : 1;
UINT s : 1;
UINT el : 1;
UINT link_address;
} PRO100_OP_BLOCK;
// Command block (maximum size)
typedef struct
{
PRO100_OP_BLOCK op_block;
UCHAR data[PRO100_MAX_OP_BLOCK_SIZE - sizeof(PRO100_OP_BLOCK)];
} PRO100_OP_BLOCK_MAX;
// Multicast address setup operation
typedef struct
{
PRO100_OP_BLOCK op_block;
UINT count : 14;
UINT reserved : 2;
} PRO100_OP_MCAST_ADDR_SETUP;
// Transmit operation
typedef struct
{
PRO100_OP_BLOCK op_block;
UINT tbd_array_address;
UINT data_bytes : 14;
UINT zero : 1;
UINT eof : 1;
UINT threshold : 8;
UINT tbd_count : 8;
} PRO100_OP_TRANSMIT;
// TBD
typedef struct
{
UINT data_address;
UINT size : 15;
UINT zero1 : 1;
UINT el : 1;
UINT zero2 : 15;
} PRO100_TBD;
// Function prototypes
void pro100_init();
void pro100_new(struct pci_device *pci_device);
void pro100_init_recv_buffer(PRO100_CTX *ctx);
void pro100_alloc_recv_buffer(PRO100_CTX *ctx);
int pro100_config_read (struct pci_device *pci_device, u8 iosize, u16 offset,
union mem *data);
int pro100_config_write (struct pci_device *pci_device, u8 iosize, u16 offset,
union mem *data);
bool pro100_hook_write(PRO100_CTX *ctx, UINT offset, UINT data, UINT size);
bool pro100_hook_read(PRO100_CTX *ctx, UINT offset, UINT size, UINT *data);
void pro100_write(PRO100_CTX *ctx, UINT offset, UINT data, UINT size);
void pro100_flush(PRO100_CTX *ctx);
UINT pro100_read(PRO100_CTX *ctx, UINT offset, UINT size);
void pro100_mem_read(void *buf, phys_t addr, UINT size);
void pro100_mem_write(phys_t addr, void *buf, UINT size);
void pro100_proc_guest_op(PRO100_CTX *ctx);
void pro100_wait_cu_ru_accepable(PRO100_CTX *ctx);
void pro100_exec_cu_op(PRO100_CTX *ctx, PRO100_OP_BLOCK_MAX *op, UINT size);
void pro100_init_cu_base_addr(PRO100_CTX *ctx);
void pro100_init_ru_base_addr(PRO100_CTX *ctx);
UINT pro100_get_op_size(UINT op, void *data);
void pro100_generate_int(PRO100_CTX *ctx);
void *pro100_alloc_page(phys_t *ptr);
void pro100_free_page(void *v, phys_t ptr);
void pro100_beep(UINT freq, UINT msecs);
void pro100_sleep(UINT msecs);
UINT pro100_read_send_packet(PRO100_CTX *ctx, phys_t addr, void *buf);
void pro100_send_packet_to_line(PRO100_CTX *ctx, void *buf, UINT size);
void pro100_poll_ru(PRO100_CTX *ctx);
void pro100_write_recv_packet(PRO100_CTX *ctx, void *buf, UINT size);
char *pro100_get_ru_command_string(UINT ru);
char *pro100_get_cu_command_string(UINT cu);
void pro100_get_stat_ack_string(char *str, UCHAR value);
void pro100_get_int_bit_string(char *str, UCHAR value);
PRO100_CTX *pro100_get_ctx();
void pro100_init_vpn_client(PRO100_CTX *ctx);
#endif // _CORE_VPN_PRO100_H
</document_content>
</document>
</documents>