
Sep 2019

Log range 0920 - 0930.

9/26/19

Check out PR (partial reconfiguration) today.

create_pblock

9/25/19

Singularity and Helios

- I've been reading Singularity today. It has many insights on extension and isolation. Some things we might be interested in: 1) every application has a manifest; we may want a similar one for each FPGA app. 2) The sealed OS architecture, where the OS and apps remain invariant after installation; this matches the nature of FPGAs.
- Also, I think it's important to figure out a way to do IP sharing. The same mechanism would also benefit PR.
- Scheduling: preemptive or non-preemptive?

Thoughts after reading the Singularity papers:

- The contract-based channels are promising.
- The manifest-based program approach is also promising. Similarly, AmorphOS associates a Resource Vector with each FPGA application. I think it's valid and beneficial to attach such a spec to FPGA applications (a sketch follows below).
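As a concrete strawman, here is a minimal sketch of what such a per-app manifest / resource vector might look like. Everything in it is hypothetical: the field names (LUTs, BRAM, DSPs, channel contracts) and the admission check are my own illustration, not taken from Singularity or AmorphOS.

```python
from dataclasses import dataclass, field

@dataclass
class ChannelContract:
    """One typed message channel, Singularity-style (hypothetical)."""
    name: str
    msg_types: list     # message types this channel may carry
    peer: str           # app or service on the other end

@dataclass
class FPGAAppManifest:
    """Hypothetical per-app manifest: a static resource vector plus channels."""
    name: str
    bitstream: str      # path to the (partial) bitstream
    luts: int           # resource vector: logic, memory, DSP demand
    bram_kb: int
    dsps: int
    channels: list = field(default_factory=list)

def admit(m, free_luts, free_bram_kb, free_dsps):
    """Admission check: does the app's resource vector fit what's free?"""
    return (m.luts <= free_luts and m.bram_kb <= free_bram_kb
            and m.dsps <= free_dsps)

app = FPGAAppManifest(
    name="rdm", bitstream="rdm_partial.bit",
    luts=120_000, bram_kb=2_048, dsps=64,
    channels=[ChannelContract("req", ["ReadReq", "WriteReq"], peer="shell")],
)
print(admit(app, free_luts=300_000, free_bram_kb=4_096, free_dsps=128))  # True
```

Because the manifest is static, a scheduler could make placement and admission decisions before loading any bitstream, which fits the sealed, install-time flavor of Singularity.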

I think I need to think more about the applications. Can't wait until it's too late!

If an app is too big to fit into an FPGA, can we do a "bitstream" swap?

- The app needs to conform to some sort of model (e.g., message-based).
- Needs fast PR.
- It must be slow from the app's point of view, but it's a solution.
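A rough sketch of how such a swap could work under a message-based model; every name here (save_state, restore_state, partial_reconfigure) is a hypothetical stand-in. The point is only that a message-based boundary gives the runtime a clean place to drain traffic, capture state, reprogram the region, and resume.

```python
import queue

class SwappableApp:
    """Hypothetical message-based FPGA app: all I/O goes through a
    mailbox, so the runtime has a clean point to quiesce it."""

    def __init__(self, name):
        self.name = name
        self.mailbox = queue.Queue()
        self.counter = 0              # placeholder for real app state

    def save_state(self):
        assert self.mailbox.empty()   # must be drained before a swap
        return {"counter": self.counter}

    def restore_state(self, state):
        self.counter = state["counter"]

def partial_reconfigure(region, bitstream):
    """Stand-in for the real PR flow (loading a partial bitstream)."""
    print(f"loading {bitstream} into {region}")

def swap(region, out_app, in_app, bitstream):
    """'Bitstream swap': replace out_app with in_app in one PR region."""
    state = out_app.save_state()            # 1. quiesce and capture state
    partial_reconfigure(region, bitstream)  # 2. reprogram the region
    in_app.restore_state(state)             # 3. hand the state over

swap("pblock_0", SwappableApp("phase1"), SwappableApp("phase2"),
     "phase2_partial.bit")
```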

0923 Monday

Continuing to work on the FPGA stuff. Let's focus on writing down possible design ideas. I should also read some related work.

0921 Weekend

Spent some time reading ATC papers and came across quite a few interesting ones.

  • Distributed actor runtime
    • I've come across actors many times recently, e.g., in iPipe and ST-Accel.
    • There are some open-source frameworks: Erlang and Akka.
    • It's a model I should consider in the future (see the sketch after this list).
  • SSD related
    • Alibaba has a study paper about SSD reliability in their datacenters.
    • Amy Tai has an interesting paper: they enable distributed storage systems to run on high-error-rate SSDs. Traditionally, if an SSD has a high error rate, it hurts local file system performance and thus higher-level system performance. Their idea is neat: use the remote replicas to recover from local SSD errors! That way they can keep using those SSDs!
    • A study of file systems on SSDs, from Toronto. I haven't read it yet.
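Since the actor model keeps coming up, here is a minimal sketch of the core idea in Python: each actor owns its state and a mailbox, and a single thread drains that mailbox. This is only an illustration; it has none of the supervision or distribution that Erlang/Akka provide, and the class and method names are my own.

```python
import queue
import threading

class Actor:
    """Minimal actor: private state plus a mailbox drained by one thread.

    Only the actor's own thread touches its state, so no locks are
    needed; everything else interacts with it purely via messages."""

    def __init__(self):
        self._mailbox = queue.Queue()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def send(self, msg):
        self._mailbox.put(msg)

    def _run(self):
        while True:
            msg = self._mailbox.get()
            if msg is None:            # poison pill: stop this actor
                return
            self.receive(msg)

    def receive(self, msg):            # subclasses define the behavior
        raise NotImplementedError

class Counter(Actor):
    def __init__(self):
        self.total = 0                 # state owned by this actor only
        super().__init__()

    def receive(self, msg):
        self.total += msg
        print(f"total = {self.total}")

c = Counter()
c.send(1)
c.send(2)
c.send(None)                           # shut the actor down
c._thread.join()                       # wait for the mailbox to drain
```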

0920 Fri

Well... I should continue with this.

We moved to UCSD recently. Everything is set up except the desktop and server stuff.

Started using F1 recently. Porting our code from the VCU108 to the VCU118. The migration between boards and between different Vivado versions is a REAL headache (VCU108 -> VCU118 && 2018.2 -> 2018.3).

The TCL scripts generated by Vivado hardcode IP versions. Upgrading Vivado can bump the IP versions, which breaks those TCL scripts. I've found a workaround, but if the IP interface itself changed, the scripts have to be fixed manually.
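For reference, one workaround in this family (not necessarily the one I used) is to strip the hardcoded `-version` arguments from the generated `create_ip` calls, so Vivado resolves each IP to whatever version the current installation ships; running `upgrade_ip [get_ips]` afterwards is another common step. Below is a sketch of that idea as a small Python patcher; the regex assumes the scripts look like typical Vivado output.

```python
import re
import sys

# Drop the hardcoded "-version X.Y" argument from create_ip calls in a
# Vivado-generated TCL script, so Vivado resolves each IP to the version
# shipped with the current installation instead of the pinned one.
VERSION_ARG = re.compile(r"(create_ip\b[^\n]*?)\s+-version\s+\S+")

def unpin_ip_versions(path):
    with open(path) as f:
        script = f.read()
    patched, n = VERSION_ARG.subn(r"\1", script)
    with open(path + ".patched", "w") as f:
        f.write(patched)
    print(f"{path}: unpinned {n} create_ip call(s)")

if __name__ == "__main__":
    for tcl in sys.argv[1:]:
        unpin_ip_versions(tcl)
```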

Many things left on the table:

- Merge LegoOS code
- Think about and finish design doc
- Tons of papers to read

Life-wise: the sea is nearby, and although the UCSD gym sucks, it has jiu-jitsu courses.

Let’s do the work.

  • Try to make PCIe work first. Checking out xtp444, the VCU118 PCIe reference design.

Okay. Finished patching the XDC file; basically went through the example designs and checked a couple of design docs, same old shit. Synthesis passes. Implementation failed because the disk is full (?!).

Anyway, the next steps are:

- Resize the disk
- Run implementation, check that it passes
- Run simulation, functionality check of RDM!

